I'm working on a tracking system for a project that requires heavy insert volume on a large table - around 200 inserts/sec, with the table adding around 5 million rows a day, and a single index (primary key on an auto-incremented id column). The row structure is: int (primary/autoincrement), vc(32), vc(128), vc(64), vc(64), vc(16), vc(16), int, vc(254), vc(96), vc(254), timestamp, int, timestamp, int.
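For reference, a sketch of what that layout looks like as DDL - the column names here are placeholders I made up, only the types and the auto-increment primary key come from the description above:

```sql
-- Hypothetical DDL matching the described column layout;
-- names are placeholders, only types/index are from the post.
CREATE TABLE tracking (
  id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  col_a VARCHAR(32),
  col_b VARCHAR(128),
  col_c VARCHAR(64),
  col_d VARCHAR(64),
  col_e VARCHAR(16),
  col_f VARCHAR(16),
  col_g INT,
  col_h VARCHAR(254),
  col_i VARCHAR(96),
  col_j VARCHAR(254),
  col_k TIMESTAMP NULL,
  col_l INT,
  col_m TIMESTAMP NULL,
  col_n INT
) ENGINE=InnoDB;
```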
I've always used MySQL for this kind of stuff, but my server is starting to buckle (load jumps above 1 and queries start to lag), even after tossing in various 'hacks' to speed things up: cleansing the main tracking table into an archive table every hour, maxing out the MySQL config, using partitions and INSERT DELAYED, etc. I've also confirmed that MySQL is the bottleneck and not httpd/nginx, since the server does fine when I remove the tracking calls. I also think the server should be able to handle this: 12-core Intel, 64 GB RAM, SSD. I'd ideally like to keep all of this on a single server if possible, and to remove the archiving aspect since it's making readback of stats kind of a pain and not 'realtime.' Also, the id of the inserted row (via mysql_insert_id) is used later on, so I need that intact.
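The hourly cleanse-to-archive job is roughly this shape - table and column names are assumptions on my part, not the actual implementation:

```sql
-- Rough sketch of the hourly "cleanse to archive" pattern
-- (tracking_archive / created_at are assumed names):
INSERT INTO tracking_archive
  SELECT * FROM tracking
  WHERE created_at < NOW() - INTERVAL 1 HOUR;

DELETE FROM tracking
  WHERE created_at < NOW() - INTERVAL 1 HOUR;
```

This is exactly the part I'd like to get rid of, since stats queries then have to union the live and archive tables.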
Anyway, is MySQL capable of handling this volume given my constraints and server, or are there better solutions out there to explore? I have no problem upgrading the server if that will help, or working on more optimization, but I've kind of maxed out what I know about db optimizing/scaling. Wanted to see if anyone on here has any thoughts/suggestions before I go down the Stack Overflow rabbit hole even further. Thanks in advance.