From: Nicholas G. <ngo...@dy...> - 2011-01-27 00:30:04

Splunk has generously offered to host our next San Francisco meetup:

250 Brannan, San Francisco, CA 94107

For details, and to RSVP, please go to meetup: http://www.meetup.com/San-Francisco-Eigenbase-Developers/calendar/16200530/

Informal, mainly social networking and chit-chat around our favorite "data management framework" and column-store database. In addition to simply getting caught up, we'll do our usual "unconference"-style presentations: 10-15 minute talks, voted on at the start of the meeting. Potential topics are listed below, but bring your own; everything is fair game for discussion.

a) Proposed Eigenbase migration from P4 to some set of GIT/SVN/...
b) DynamoNETWORK update (and *maybe* a demo)
c) Firewater update (nick + jvs)
d) Pentaho plugins (overview of integrations with open source BI tools)

Look forward to seeing you all there!

Nick

PS - They're starting to get to know me by first name on these Wednesday Seattle-SFO flights. :)

From: John S. <js...@gm...> - 2011-01-21 05:18:20

We don't currently have anything like this. It would not be too hard to add a global performance counter to show the number of rows loaded (across all tables). However, I'm not sure what to do about the indexing phase, which is done entirely separately after the row loading phase, and which can take a significant amount of time. So unless the table had no indexes at all, your progress bar would climb smoothly and then stick for a long time while the indexing was being done. It's harder to come up with a simple "counter" for index update, since it involves sorting and bitmap merge, which aren't in terms of rows at all.

Thoughts on this?

JVS

On Thu, Jan 20, 2011 at 4:00 PM, Aris Setyawan <ari...@gm...> wrote:
> Hi,
>
> I'm new to LucidDB.
>
> Can I access the undo log record to know the processed record count in
> bulk loading with LucidDB?
>
> I need it to make a progress bar in the "export-import dbf module" in my
> application. Currently, I use "show innodb status" in MySQL InnoDB, but
> the bulk loading and aggregate query is slow. I want to try LucidDB
> because the dbf files imported to the database have numerous columns,
> the same as the database table.
>
> -Aris
>
> ------------------------------------------------------------------------------
> Special Offer -- Download ArcSight Logger for FREE (a $49 USD value)!
> Finally, a world-class log management solution at an even better price - free!
> Download using promo code Free_Logger_4_Dev2Dev. Offer expires
> February 28th, so secure your free ArcSight Logger TODAY!
> http://p.sf.net/sfu/arcsight-sfd2d
> _______________________________________________
> luciddb-users mailing list
> luc...@li...
> https://lists.sourceforge.net/lists/listinfo/luciddb-users

From: Aris S. <ari...@gm...> - 2011-01-21 00:00:39

Hi,

I'm new to LucidDB.

Can I access the undo log record to know the processed record count in bulk loading with LucidDB?

I need it to make a progress bar in the "export-import dbf module" in my application. Currently, I use "show innodb status" in MySQL InnoDB, but the bulk loading and aggregate query is slow. I want to try LucidDB because the dbf files imported to the database have numerous columns, the same as the database table.

-Aris

From: John S. <js...@gm...> - 2011-01-20 23:43:05

It's useful for benchmarking.

JVS

On Thu, Jan 20, 2011 at 3:13 PM, Nicholas Goodman <ngo...@dy...> wrote:
> I'm curious what your thought process is on this. I'm trying to determine
> what the benefit of doing this would be. I can only think that you're
> trying to work around a bug, but I don't know of any open issues in this
> regard.
>
> Nick
>
> On Jan 20, 2011, at 2:41 PM, Michael <nan...@ya...> wrote:
>
>> Hi all,
>>
>> Is there some way we could clear the buffers in LucidDB?
>>
>> Thanks,
>> Mike

From: Nicholas G. <ngo...@dy...> - 2011-01-20 23:37:00

I'm curious what your thought process is on this. I'm trying to determine what the benefit of doing this would be. I can only think that you're trying to work around a bug, but I don't know of any open issues in this regard.

Nick

On Jan 20, 2011, at 2:41 PM, Michael <nan...@ya...> wrote:
> Hi all,
>
> Is there some way we could clear the buffers in LucidDB?
>
> Thanks,
> Mike

From: John S. <js...@gm...> - 2011-01-20 23:01:10

The guaranteed way is to restart the server. LucidDB uses direct I/O, so nothing is cached by the OS (although of course lower-level caching, such as a disk controller's, can always be present).

If you want to avoid restarting the server, you can change the system parameter "cachePagesInit" to a low number (like 20) and then set it back to its original setting. However, this is dangerous, since if you go too low, you can end up with an unusable system. And for any non-zero value, those last few buffers (e.g. 20*32K) won't be discarded. So bouncing the server is a lot safer and guaranteed.

http://pub.eigenbase.org/wiki/LucidDbBufferPoolSizing

JVS

On Thu, Jan 20, 2011 at 2:41 PM, Michael <nan...@ya...> wrote:
> Hi all,
>
> Is there some way we could clear the buffers in LucidDB?
>
> Thanks,
> Mike

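[Editor's note] The shrink-and-restore trick John describes can be sketched in SQL roughly as follows. This is a sketch only: the restore value (5000) is a placeholder for whatever your system was actually using, and the sys_root.dba_system_parameters view name is an assumption about the installed system views; verify both against your installation before running it.

```sql
-- Record the current setting first (assumes the dba_system_parameters
-- view is available in your build).
select param_value from sys_root.dba_system_parameters
where param_name = 'cachePagesInit';

-- Shrink the cache so that most buffered pages are discarded.
-- WARNING: going too low can leave the system unusable.
alter system set "cachePagesInit" = 20;

-- Restore the original value (5000 is a placeholder; use the value
-- returned by the query above).
alter system set "cachePagesInit" = 5000;
```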
From: Michael <nan...@ya...> - 2011-01-20 22:41:21

Hi all,

Is there some way we could clear the buffers in LucidDB?

Thanks,
Mike
--
View this message in context: http://luciddb-users.1374590.n2.nabble.com/Clear-buffers-tp5945721p5945721.html
Sent from the luciddb-users mailing list archive at Nabble.com.

From: Nicholas G. <ngo...@dy...> - 2011-01-05 18:34:44

On Jan 5, 2011, at 2:44 AM, lynn_19840516 wrote:
> Firstly, I'd like to thank LucidDB's developers! The performance is
> mind-blowing, especially coming from a conventional RDBMS.

We're glad you like it! When you get to the end of your initial project, would you be willing to share some of these "mind-blowing" stats compared to your original RDBMS? We're trying to collect this information so potential users can see what kind of "real world" improvements to expect.

> 0: jdbc:luciddb:http://localhost> select * from bills;
> +----------+-------+--------+--------------+------------+---------+------------+
> | bill_id | type | state | from_entity | to_entity | holder | approver
>
> 0: jdbc:luciddb:http://localhost> SELECT * from bills where bill_id =1;
> Error: From line 1, column 27 to line 1, column 33: Column 'BILL_ID' not
> found in any table (state=,code=0)

The ANSI standard dictates that any unquoted identifier (i.e., bill_id in your case) is UPPERCASED and then evaluated. Postgres, which respects case, has the column defined in lower case (i.e., bill_id). You can change your query to:

> select * from bills where "bill_id" = 1;

which will prevent LucidDB from uppercasing the identifier. I'd recommend creating any tables you create in LucidDB with uppercase identifiers. For foreign data sources, such as Postgres, the case will come from the remote database, so you'll just have to deal with whatever case the database had originally.

Good luck, and let us know how you get on!

Nick

PS - You can also use this FAQ for some other common issues you may (or may not) encounter: http://pub.eigenbase.org/wiki/LucidDbUserFaq#Missing_Columns

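[Editor's note] The case-folding behavior Nick describes can be demonstrated with a short, self-contained example (the table and column names here are illustrative, not taken from the original Postgres schema):

```sql
-- Unquoted identifiers are folded to upper case per the ANSI standard,
-- so these two statements refer to the same table and column,
-- BILLS.BILL_ID:
create table bills (bill_id int);
select bill_id from BILLS;

-- A quoted identifier keeps its exact case, so this creates a
-- different, lower-case table and column that must always be quoted:
create table "bills" ("bill_id" int);
select "bill_id" from "bills";

-- This fails: bill_id is uppercased to BILL_ID, which does not exist
-- in the lower-case table "bills":
-- select bill_id from "bills";
```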
From: John S. <js...@gm...> - 2011-01-03 21:05:01

I had forgotten about the client memory setting issue... I've logged a bug, since we should really fix this (and in general make it easier to configure the memory settings without having to edit scripts directly).

http://issues.eigenbase.org/browse/LDB-234

JVS

On Thu, Dec 30, 2010 at 6:51 PM, Jeremy Lemaire <je...@vo...> wrote:
> System Parameters:
> http://luciddb-users.1374590.n2.nabble.com/file/n5877674/system_parameters.txt
>
> After seeing the jstack I am also leaning towards a problem with exhausted
> free memory, as opposed to my original concern that it was deadlock. Because
> of this I have requested more RAM for this machine and also held off on
> submitting anything to JIRA. If you disagree, let me know and I will submit
> the details we have discussed.
>
> As for the buffer pool, early on I tried several different settings. 4G for
> the Java heap and 6G for the buffer pool seemed to work best at the time. My
> theory for not making the min and max heap both 4G was that I would not be
> able to run more than one instance of sqllineClient. Given that it is only
> a 16G system and that lucidDbServer and sqllineClient share Java heap
> settings as defined in the defineFarragoRuntime.sh script, it seemed better
> to allow those clients that do not require 4G to use as little as 512M and
> grow dynamically. However, running as many as 5 (memory-hungry) instances of
> sqllineClient simultaneously, each of which is capable of consuming a max
> of 4G of RAM, I can see how memory could quickly become an issue on a 16G
> system. My understanding of the Java heap, however, is that the app will
> just chew up swap once it runs out of free memory, which could be why it
> appears to hang. Maybe it is not hanging at all, but instead just swapping
> like crazy and going sloooow. Seemingly this would explain the analyze
> statements not completing, but could it go slow enough not to service the
> socket connections properly? I don't recall excessive swap, but I will be
> sure to check if this happens again.
>
> For now I have made a change to do all inserts in parallel and all analyzes
> with ESTIMATE (not COMPUTE) serially, and this appears to have worked around
> the problem. Going forward I will try to get this going on a 32G machine
> with version 0.9.3. Also, within the next couple of months I should have a
> Hadoop cluster in place to offload some of the computation and storage that
> LucidDb is needlessly doing now, and allow it to focus on OLAP jobs. I think
> these changes will make my LucidDb setup much happier.
>
> Let me know if there is any other information you would like, and if you
> think a JIRA entry is still warranted.

From: John S. <js...@gm...> - 2011-01-01 06:38:22

On Thu, Dec 30, 2010 at 7:07 PM, Jeremy Lemaire <je...@vo...> wrote:
> With the new year fast approaching, I am trying to update my views to
> include the new 2011 partitions, but when I try to drop the existing views
> I get an error indicating that I need to cascade the delete. The problem is
> that I have no idea what views, tables, etc. this cascade is going to
> affect.
>
> Is there a dry-run setting or something similar that will show me what will
> be removed if I cascade this delete?

We don't yet have a DBA_DEPENDENCIES view, and the DROP command does not report the dependencies. However, we do have a secret weapon: a not-very-well-documented metadata query language called LURQL. We currently only use it internally, but there's a test UDX which allows you to access it.

First, run these commands to register the UDX:

create schema md;
set schema 'md';
set path 'md';

create function lurql(
  server_name varchar(128),
  query varchar(65535))
returns table(
  class_name varchar(128),
  obj_name varchar(128),
  mof_id varchar(128),
  obj_attrs varchar(65535))
language java
parameter style system defined java
no sql
external name 'class net.sf.farrago.test.LurqlQueryUdx.queryMedMdr';

Then, execute this to see the first level of dependencies:

select class_name, obj_name from table(lurql(cast(null as varchar(128)),
'select c from class LocalView where name=''YOUR_VIEW_NAME''
then (follow origin end supplier
then (follow destination end client as c));'));

(Note that YOUR_VIEW_NAME is surrounded by pairs of single quotes, not double quotes.)

To see all of the cascaded dependencies recursively, execute this:

select class_name, obj_name from table(lurql(cast(null as varchar(128)),
'select c from class LocalView where name=''YOUR_VIEW_NAME''
then (recursively (follow origin end supplier
then (follow destination end client as c)));'));

The results should look like this:

+-------------+-----------+
| CLASS_NAME  | OBJ_NAME  |
+-------------+-----------+
| LocalView   | V3        |
| LocalView   | V2        |
+-------------+-----------+

These assume that your view name is unique across schemas; if that's not the case, I can give you a longer query which deals with name qualification.

JVS