I'm building a site with PHP and a MySQL database. I already know that after a while the site will grow to hold a lot of information.
Now, how big can the database get before it starts to lose speed and reliability?
Is there also a possibility that I could lose information in some way?
If that is possible, what would be a better thing to use?
How large a table can be depends on the OS and filesystem being used:
Linux 2.2-Intel 32-bit - 2GB (LFS: 4GB)
Linux 2.4+ (using ext3 filesystem) - 4TB
Linux 2.4+ 64-bit (using XFS filesystem) - approx 8 million TB
Solaris 9/10 - 16TB
NetWare w/NSS filesystem - 8TB
Win32 w/ FAT/FAT32 - 2GB/4GB
Win32 w/ NTFS - 2TB (possibly larger)
MacOS X w/ HFS+ - 2TB
As you can see, size is not something you should worry about. And this limit is per table, so the database as a whole can grow even larger. Only if you are running a major forum might you exceed the limit, and then only if the server is running an old Linux version or Windows with FAT/FAT32.
As for speed, mwuah. Site content in general won't reach a stage where optimization is truly needed. Even if the database grew to 100MB from the content of the site alone, it still wouldn't have a drastic effect on speed.
Once you hit 2GB or so you will start noticing the difference between optimized and non-optimized. Especially once a database grows past 10-20GB you will see a drastic change in speed; at that point you'll also prefer to archive old data instead of keeping it alongside the active database. You can see this on major forums, where old posts simply get archived.
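Archiving like that can be done inside MySQL itself. A minimal sketch, assuming a `posts` table with a `created_at` column and an identically structured `posts_archive` table (both names hypothetical):

```sql
-- Copy posts older than one year into the archive table...
INSERT INTO posts_archive
SELECT * FROM posts
WHERE created_at < DATE_SUB(NOW(), INTERVAL 1 YEAR);

-- ...then remove them from the active table.
DELETE FROM posts
WHERE created_at < DATE_SUB(NOW(), INTERVAL 1 YEAR);
```

On InnoDB you'd wrap both statements in a transaction so the copy and the delete stay consistent.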
You can lose information, yes, so you should always remember to make backups from time to time. You could even set up an automated system for this, so that a backup gets mailed to you daily for example, depending on how big it is. Mails of 200MB will no longer be very practical.
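Automating that with MySQL's bundled `mysqldump` tool and cron looks roughly like this; the database name, user, password, and path are all placeholders:

```shell
# Crontab entry: every night at 03:00, dump the database, compress it,
# and keep one file per day of the month (overwriting last month's copy).
# Note: % must be escaped as \% inside a crontab line.
0 3 * * * mysqldump -u backupuser -pSECRET mydatabase | gzip > /var/backups/mydatabase-$(date +\%d).sql.gz
```

Mailing the compressed dump only stays practical while it's small; once it grows, copying the file to another machine is the safer route.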
There are alternatives, but all have their pros and cons. In general MySQL will be good enough.
The hardest hit to speed is the number of queries you run. For example, the site I'm working on at the moment had one particular page that ran 4,000 queries. With a standard page load running around 60 queries minimum, and that many queries hitting the database on every page load from at least 120 active members at any given time, it starts to lag a bit.
OK, this has piqued my curiosity.
First off... 4,000 queries? Seriously? What exactly are you up to...
Secondly... I'd like to see that source code.
Thirdly... depending on the use, it may be more efficient to download the whole table. That way PHP can parse through the data itself and eliminate the need for repeated round trips to the database.
BearClaw: I don't think so; it's nearly always more efficient to make the database do it. If the whole table is requested, it all has to be moved into RAM and then PHP has to grind through it. The database often uses tricks to be faster; that's one of the points of a database: it optimizes things and does so transparently.
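A page like the 4,000-query one usually runs one query per row in a loop; that can often be collapsed into a single query with an IN () clause. A minimal sketch in PHP, assuming a hypothetical `members` table; `buildMemberQuery` is an illustrative helper, not an existing API:

```php
<?php
// Instead of one query per member ID (4,000 round trips),
// collect the IDs and build a single prepared query.
// Table and column names here are hypothetical.
function buildMemberQuery(array $ids): string
{
    // One ? placeholder per ID, so the query can be prepared safely.
    $placeholders = implode(', ', array_fill(0, count($ids), '?'));
    return "SELECT id, name FROM members WHERE id IN ($placeholders)";
}

// One round trip instead of three:
$ids = [3, 17, 42];
$sql = buildMemberQuery($ids);
// With a real PDO connection you would then run:
// $stmt = $pdo->prepare($sql); $stmt->execute($ids);
echo $sql, "\n";
```

This keeps the filtering on the database side (Grimboy's point) while still cutting the query count down to one.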
Anyway, I think the time a query on a table takes grows logarithmically with its size... or something like that.
Grimboy, normally I would agree. But with that gross number of queries, PHP has to query the database 4k times and the database has to return the data 4k times, essentially doubling the number of operations. When it comes to something that large, you'd probably have to try it both ways and see what the different processing times are.