Once I got my way, the largest tables had around one million records each and the whole database held roughly three million records. The developers and testers considered going postal on me - after all, the tests took longer, and some of the software had to be rewritten because the user interface was crawling at best. Everyone told me how stupid it was to fill a test database with that many records.
Half a year into production, one database had more than 60,000 personal records. The largest table had more than five million records.
At the end of the ’90s I worked on another project: a web-based user interface, a live system used by more than 30 different customers, with plenty of real test data available. The biggest database had a few million records altogether. Many fields were blobs; normalization wasn’t really done. The whole programming team - including management - didn’t see the need for any testing on databases with more than a handful of records.
“When it works with one record, it will work with a few million records.”
One of their websites regularly timed out - the SQL just took too much time. So what did the developer do? He took his test database: two records in one table, twenty or thirty in another, plus a handful more in the other tables involved in a badly written join. The SQL took a few seconds to run. Then he came to me: “I don’t understand. My computer is slower than the server, yet the result page shows up in seconds!”
Well, duh! The customer database had a few hundred thousand records in the tables being joined.
Yet to this day that development team insists that testing with the smallest possible database is all it takes.
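To show what I mean by realistic volumes, here is a minimal sketch - with a hypothetical customer/orders schema, in Python and SQLite purely so it runs anywhere: generate a few hundred thousand rows and time the application's queries against that, not against twenty records.

```python
# A minimal sketch with a hypothetical customers/orders schema. The point is not
# the schema but the volume: time your joins against a few hundred thousand
# generated rows instead of a handful.
import random
import sqlite3
import string
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
""")

def random_name(length=12):
    return "".join(random.choices(string.ascii_lowercase, k=length))

NUM_CUSTOMERS = 50_000
NUM_ORDERS = 500_000

# Bulk-insert generated rows; executemany keeps this fast enough for a test run.
cur.executemany(
    "INSERT INTO customers (name) VALUES (?)",
    ((random_name(),) for _ in range(NUM_CUSTOMERS)),
)
cur.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    ((random.randint(1, NUM_CUSTOMERS), random.uniform(1, 500)) for _ in range(NUM_ORDERS)),
)
conn.commit()

# Time the same kind of join the application would run. A query that looks
# instant on a handful of rows can behave very differently at this size.
query = """
    SELECT c.name, COUNT(*), SUM(o.amount)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
"""
start = time.perf_counter()
rows = cur.execute(query).fetchall()
print(f"{len(rows)} result rows in {time.perf_counter() - start:.2f}s")
```

Generating that much data takes a script and a coffee break, not a rewrite - and it would have caught the timeout long before a customer did.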
How much test data do you generate?