I am mainly a programmer with a 'reasonable' understanding of SQL Server. On my previous projects a single box with a single SQL Server install has been more than adequate for my needs.
My current project is technically quite simple but holds a lot of data — about 6 million records in one of the main tables. Some of the stored procedures are pretty complicated and, even after the tables have been indexed, are still taking 30 to 180 seconds to complete, and a single call to one of these stored procedures can push the processors to about 80%. This will be a multi-user system, so I foresee problems if these 'hard' queries are hit concurrently.
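Before throwing hardware at queries like that, it usually pays to find out whether the time is going to CPU or to I/O. A diagnostic sketch along these lines is what I'd try first — the procedure and parameter names here are made up for illustration:

```sql
-- Measure where one slow call spends its time.
SET STATISTICS TIME ON;   -- CPU vs. elapsed time per statement
SET STATISTICS IO ON;     -- logical/physical reads per table

EXEC dbo.usp_MySlowProcedure @SomeParam = 42;  -- hypothetical proc

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

-- Or pull the worst offenders from the plan cache:
SELECT TOP 10
    qs.total_worker_time  / qs.execution_count AS avg_cpu_time,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
    qs.execution_count,
    st.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_time DESC;
```

If avg_cpu_time is close to avg_elapsed_time the queries are CPU-bound and more or faster processors (or leaner plans) help; if elapsed time dwarfs CPU time, the box is waiting on disk and the RAID array is the bottleneck.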
I have some thoughts on how to restructure the data to make the queries more efficient, but I am wondering what the next step is from a hardware/software perspective.
Can I seamlessly add another physical server with another install of SQL Server to help spread the load — so the stored procedure is called at one point but the processing is spread over different physical servers? Will it manage this itself, or do I have to do anything? Can I store data over multiple servers? It is currently all stored on a single 130GB(ish) RAID array, but could grow quite a bit more. These are the types of questions I am asking myself.
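For what it's worth, SQL Server's classic answer to "store data over multiple servers" is a distributed partitioned view: each server holds a horizontal slice of the table, enforced by a CHECK constraint, and a UNION ALL view over the local and linked-server tables presents it as one table. It is not automatic — you have to design the partitioning. A minimal sketch, with every server, database, table, and column name invented for illustration:

```sql
-- Local slice: this server holds only 2007 rows.
CREATE TABLE dbo.Orders_2007 (
    OrderId   INT NOT NULL,
    OrderYear INT NOT NULL CHECK (OrderYear = 2007),
    -- ... other columns ...
    CONSTRAINT PK_Orders_2007 PRIMARY KEY (OrderId, OrderYear)
);

-- A matching dbo.Orders_2008 table lives on the linked server SERVER2.
-- The view makes the two slices look like one table:
CREATE VIEW dbo.Orders AS
    SELECT OrderId, OrderYear /*, ... */ FROM dbo.Orders_2007
    UNION ALL
    SELECT OrderId, OrderYear /*, ... */ FROM SERVER2.SalesDb.dbo.Orders_2008;
```

The CHECK constraints let the optimizer skip servers that cannot hold matching rows, so a query filtered on OrderYear only touches one box.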
How do big systems cope? How do you keep the query times down?