When does a MySQL table become too large?
Does a large database have a bigger impact on page load times, or on CPU load?

I have a table with about 1,000,000 entries so far. Every time an image on the page is clicked it inserts an entry, and on every page load it groups the table by `img`, sorts by the number of entries (clicks) for each img, and displays them in order on the page, about 50 thumbs. I'm mostly concerned about how long the page takes to load: does sorting a table this big make the page load a lot slower, or is it mostly unnoticeable?

The table (size is about 70MB):

( `id` int(11) NOT NULL auto_increment,
  `time` int(11) NOT NULL,
  `img` varchar(255) NOT NULL,
  `country` varchar(30) NOT NULL,
  `c_ip` varchar(15) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=INNODB DEFAULT CHARSET=latin1;
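For reference, the per-pageview query described above would be roughly the following (a sketch only; the table name `clicks` is an assumption, since the CREATE TABLE above doesn't show it):

  SELECT img, COUNT(*) AS clicks
  FROM clicks
  GROUP BY img
  ORDER BY clicks DESC
  LIMIT 50;

Without an index on `img`, that has to scan and sort the whole table on every page load.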
Store country as char(2) and the IP as an unsigned int; that will make the table way smaller... and if you are grouping by img, you should probably create an index on it...

:2 cents:
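Something like this would cover both suggestions (a sketch, again assuming the table is called `clicks`; MySQL's INET_ATON()/INET_NTOA() handle the IP conversion):

  -- target column types: 2-letter country code, IPv4 packed into 4 bytes
  -- (existing dotted-quad strings would need converting via INET_ATON()
  --  during the migration, not a straight MODIFY)
  ALTER TABLE clicks
    MODIFY country CHAR(2) NOT NULL,
    MODIFY c_ip INT UNSIGNED NOT NULL;

  -- index the column you group on
  ALTER TABLE clicks ADD INDEX idx_img (img);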
What woj said, but two main concerns:

1. If you're grouping by image, the img column really should be an img_id - an integer sized for your image count (SMALLINT UNSIGNED is plenty if you have under ~65k images). Far, far faster to group on an integer index than a varchar one, since the index will be numeric and a shitload smaller.

2. If this is the table that is getting hit with the inserts when an image is viewed, you really should make it an InnoDB table, not MyISAM.
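A sketch of that normalization (table and column names here are just placeholders, not anything from the thread):

  -- lookup table: one row per image
  CREATE TABLE images (
    img_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
    img VARCHAR(255) NOT NULL,
    PRIMARY KEY (img_id),
    UNIQUE KEY uq_img (img)
  ) ENGINE=InnoDB;

  -- click log references the image by id instead of repeating the filename
  CREATE TABLE clicks (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    time INT NOT NULL,
    img_id SMALLINT UNSIGNED NOT NULL,
    country CHAR(2) NOT NULL,
    c_ip INT UNSIGNED NOT NULL,
    PRIMARY KEY (id),
    KEY idx_img_id (img_id)
  ) ENGINE=InnoDB;

Grouping then happens on a 2-byte img_id index instead of a 255-byte varchar.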
bad db design
MySQL...
:throwup
:pimp |
DB reads, if the indexes and layout are decent, should be no problem - it's presumably the insertions that are killing your server's performance. Batch processing is likely the better way to do what you're seeking...

More specifically, log the information to an append-only file with one entry per line (akin to a log file). Then every hour or whatever (imho, just once a day would likely be sufficient for what you're using that data for), run a cron job that processes that file into another file containing the aggregate data for each image clicked - then add those click totals / additional data (i.e. if also tracking IPs and/or countries, etc.) to the respective image totals / data already in the database.

Ron
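The database side of that batch step could look something like this (a sketch only; the file path and the staging/summary table names are assumed for illustration):

  -- load the aggregated totals the cron job produced
  LOAD DATA INFILE '/tmp/click_totals.csv'
  INTO TABLE click_staging
  FIELDS TERMINATED BY ','
  (img_id, click_count);

  -- fold the batch into the running per-image totals
  INSERT INTO image_stats (img_id, clicks)
  SELECT img_id, click_count FROM click_staging
  ON DUPLICATE KEY UPDATE clicks = clicks + VALUES(click_count);

  -- clear the staging table for the next run
  TRUNCATE TABLE click_staging;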
Only 70MB - might be a candidate for a well-managed MEMORY table solution. You might also look at using INSERT DELAYED if you aren't already.
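A minimal sketch of both ideas (note that INSERT DELAYED only works with MyISAM/MEMORY/ARCHIVE tables, not InnoDB, and it was removed from later MySQL versions; the buffer table name is assumed):

  -- buffer raw clicks in a MEMORY table so inserts never touch disk
  CREATE TABLE clicks_buffer (
    img VARCHAR(255) NOT NULL,
    country CHAR(2) NOT NULL,
    c_ip INT UNSIGNED NOT NULL,
    time INT NOT NULL
  ) ENGINE=MEMORY;

  -- the click handler queues the write instead of waiting on it
  INSERT DELAYED INTO clicks_buffer (img, country, c_ip, time)
  VALUES ('pic123.jpg', 'US', INET_ATON('1.2.3.4'), UNIX_TIMESTAMP());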
Why would you need to insert a row every click? Just increment a number on one row.

If you're doing logging, then log it to a database that's not used to sort the pics...
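i.e. the per-click work becomes a single counter update (a sketch; `image_stats` is an assumed summary table with one row per image):

  -- bump one counter per click
  UPDATE image_stats SET clicks = clicks + 1 WHERE img_id = 123;

  -- the front page just reads the counters
  SELECT img_id, clicks FROM image_stats ORDER BY clicks DESC LIMIT 50;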
If you search by indexes then the average time complexity is O(log(tablesize)), which means you could have 1,000,000,000,000 rows and it would only take about twice as long to search as 1,000,000 rows (log of 10^12 is twice the log of 10^6).
normalization needed.
Change the script so it actually builds a page rather than making it on the fly.

Log clicks and impressions in a MEMORY table and parse the clicks / impressions every few minutes. The more load you remove from MySQL, the more complex sorting you'll be able to do (most popular pix per referring domain, most productive design per domain, etc.).
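The "parse every few minutes" step might look like this (a sketch; `clicks_buffer`, `images`, and `image_stats` are assumed names carried over from the examples above):

  -- roll the buffered clicks into the permanent per-image totals
  INSERT INTO image_stats (img_id, clicks)
  SELECT i.img_id, COUNT(*)
  FROM clicks_buffer b
  JOIN images i ON i.img = b.img
  GROUP BY i.img_id
  ON DUPLICATE KEY UPDATE clicks = clicks + VALUES(clicks);

  -- then empty the buffer
  TRUNCATE TABLE clicks_buffer;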
:1orglaugh
I'm going through my old posts and reading up. Sorry to necropost, but really...

A million rows is way too much just to sort 50 thumbnails. It doesn't even matter if your server can handle it; it's a complete and total waste. You create a table with all the IPs, you parse that table into a smaller table which the website reads from quickly, and you purge the raw table as often as you wish. The website just reads from a simple table that holds the number of unique clicks each image has. Simple.
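A sketch of that two-table setup ("unique clicks" here taken to mean distinct IPs per image; table names are assumed as in the earlier examples):

  -- cron job: rebuild the small summary table from the raw log
  REPLACE INTO image_stats (img_id, clicks)
  SELECT img_id, COUNT(DISTINCT c_ip)
  FROM clicks
  GROUP BY img_id;

  -- purge raw rows older than, say, 30 days
  DELETE FROM clicks WHERE time < UNIX_TIMESTAMP() - 30*86400;

  -- page load: read only the tiny summary table
  SELECT img_id, clicks FROM image_stats ORDER BY clicks DESC LIMIT 50;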