So, what is an acceptable limit on the length of a blog entry? From what I can tell, you have not limited the entries, so the database is going to limit them for you depending on what column type you've declared. MySQL's TEXT datatype is 64k, PostgreSQL says unlimited, etc.
So, what's the longest blog post you've heard of?
For some reason Mr. Scott (the original guy who started the DB2 patch) had declared a CLOB(32k). Now, that is an extremely interesting declaration. CLOB is a pain to work with in DB2; it's great for DB2, but bad for the SQL user. GROUP BY, ORDER BY, DISTINCT, UPPER, LOWER and many other operations don't work on a CLOB column.
They do work on VARCHAR columns, though, whose max length is also 32k, and everything works there.
See, Mr. Scott is probably a lot smarter than me when it comes to DB2. CLOB can support up to 2 GB (or more, I think, on 64-bit systems). Why he chose to limit it is interesting, and if it's limited to 32k anyway, why not use VARCHAR and make life easier? Well, on non-64-bit systems you would have to declare a separate tablespace and bufferpool to get 32k in a VARCHAR; CLOB(32k) works without that, but then so would CLOB(2gb). Declaring a tablespace and bufferpool for it is really no biggie though; lots of apps require that these days, even some other open source apps that work with DB2. Anyhow, I'm just not sure why he chose that. Seems odd.
From poking around, I kind of think 32k is not really enough. But I'm not really sure. If you think 32k is enough for posts/comments and such, then life is very easy for me. Otherwise, there's a lot of creativity involved. I know 99% of blog posts would not be even close to 32k, but hey, wouldn't it be nice if s9y did support it?
Attempting to make Serendipity work with DB2 Database
-
garvinhicking
- Core Developer
- Posts: 30022
- Joined: Tue Sep 16, 2003 9:45 pm
- Location: Cologne, Germany
- Contact:
Hi!
That's true, 64k is the limit of an individual body/extended field, so a blog entry can only have a maximum of 128kb. Sadly, I've heard of 2 or 3 people who reached that limit. Basically we could use MySQL's LONGTEXT datatype, but I don't know how pgsql/sqlite would handle that.
I believe the HTML textarea element is limited to 64kb, so I think putting a value higher than 64kb would not make sense. Then again, I haven't really dug into this issue because a blog typically really isn't meant for such large postings.
Personally I would currently opt for going the 32k way for DB2; if we want more, we would need to also look into getting more for all other DB backends, and that could be done at a later point?
Best regards,
Garvin
# Garvin Hicking (s9y Developer)
# Did I help you? Consider making me happy: http://wishes.garv.in/
# or use my PayPal account "paypal {at} supergarv (dot) de"
# My "other" hobby: http://flickr.garv.in/
garvinhicking wrote: Personally I would currently opt for going the 32k way for DB2;

That's exactly what I did for now. (Remember, the original patch had improved the db abstraction layer to dynamically fill in column types per database during installation.)
I declared VARCHAR(32k) instead of CLOB(32k). But I'm lying in bed not sleeping, because I know there's a better way. (The general consensus, at least among lots of IBMers, is that DB2 is supposed to be better than those open source DBs.)
I think I'll ask some people what I can do. Maybe 32k is plenty though...
So, help me understand this.
GROUP BY clauses always throw me off.
What is the intended result set from this query:
SELECT c.categoryid,
       c.category_name,
       c.category_icon,
       c.category_description,
       c.authorid,
       c.category_left,
       c.category_right,
       c.parentid,
       a.username,
       a.realname
FROM s9y_category AS c
LEFT OUTER JOIN s9y_authors AS a
    ON c.authorid = a.authorid
LEFT OUTER JOIN s9y_authorgroups AS ag
    ON ag.authorid = 1
LEFT OUTER JOIN s9y_access AS acl
    ON (ag.groupid = acl.groupid AND acl.artifact_id = c.categoryid)
GROUP BY c.categoryid
ORDER BY category_name ASC
Or for Postgres you do a SELECT DISTINCT(categoryid) instead of the GROUP BY.
DB2 allows neither. Sorry, I'm not the best at understanding others' SQL; I'm OK with creating my own, and, like I said, I usually steer clear of GROUP BY, so I'm wondering if you can explain a little what you intend to receive here. There are a couple of queries of this type, on categories, entries, etc.
Thanks
Hi!
The GROUP BY / DISTINCT ensures that when an entry belongs to multiple categories, it isn't displayed twice on the start page of your entries.
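To see the fan-out concretely, here's a toy sketch run against SQLite (the table and column names are made up for illustration, not s9y's real schema):

```python
import sqlite3

# Hypothetical mini-schema: one entry filed under two categories.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE entries  (id INTEGER, title TEXT);
    CREATE TABLE entrycat (entryid INTEGER, categoryid INTEGER);
    INSERT INTO entries  VALUES (1, 'hello world');
    INSERT INTO entrycat VALUES (1, 10), (1, 20);
""")

# The join fans out: one row per category the entry belongs to.
plain = con.execute(
    "SELECT e.id, e.title FROM entries e "
    "LEFT JOIN entrycat ec ON ec.entryid = e.id").fetchall()
print(len(plain))  # 2 -- the entry would show up twice on the page

# GROUP BY collapses it back to one row per entry. Grouping by every
# selected column is the portable form that DB2 also accepts.
grouped = con.execute(
    "SELECT e.id, e.title FROM entries e "
    "LEFT JOIN entrycat ec ON ec.entryid = e.id "
    "GROUP BY e.id, e.title").fetchall()
print(len(grouped))  # 1
```

A SELECT DISTINCT over the same column list would do the same job here.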
About trackbacks: www.s9y.org has documentation on trackbacks, and you can easily trackback your own articles to test it.
Sorry that I'm a bit short on words, but I have much stuff to sort out before my holidays *g*
Regards,
Garvin
FYI, I'm moving along OK. I'm at a customer site this week, so I won't be doing much.
I've written a way to do a limit/offset, but the way the query needs to be formatted is going to require some more changes to the other db driver files, or else the code gets very messy.
The query needs to go something like this:
SELECT * FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY key ASC) AS rownum,
           columns
    FROM tablename
) AS foo
WHERE rownum > skip AND rownum <= (n + skip)
The simple $limit added at the end doesn't cut it for this though, so how best to integrate it?
Or, better performance-wise, would be scrollable cursors: you can select from anywhere within the result set. However, this requires the same connection handle to be passed from page to page, with the connection left open; the handle would have to be tied to a session. I haven't looked at how the code handles sessions and such, so I don't know how feasible or desirable that is.
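For reference, the ROW_NUMBER() wrapper above can be exercised end-to-end. SQLite (3.25+) supports the same window function, so here's a sketch with a made-up table standing in for the DB2 one (names and data are mine, purely for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tablename (key INTEGER, val TEXT)")
con.executemany("INSERT INTO tablename VALUES (?, ?)",
                [(i, "row%d" % i) for i in range(1, 101)])

skip, n = 20, 10  # emulate LIMIT 10 OFFSET 20

# Number the full result set, then keep only rows (skip, skip+n].
rows = con.execute("""
    SELECT * FROM (
        SELECT ROW_NUMBER() OVER (ORDER BY key ASC) AS rownum, key, val
        FROM tablename
    ) AS foo
    WHERE rownum > ? AND rownum <= ?
""", (skip, skip + n)).fetchall()

print(rows[0][1], rows[-1][1])  # 21 30 -- keys 21 through 30
```

Note that the ORDER BY has to move inside the OVER() clause, which is why a simple `$limit` string appended to the existing query string isn't enough.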