This is likely the wrong question, so my response is just a guess, based on doing software development since 1982.
It's best to build your infrastructure so that no load balancers are required.
If you design your code so that your most accessed data stays memory resident and moving data from persistent storage (disk) into memory is fast, you'll have no need for load balancing.
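As a rough sketch of that idea (written in JavaScript purely for illustration; readRecordFromDisk, the file path, and MAX_ENTRIES are placeholders, and the same pattern works in whatever language you end up using):

```javascript
// Rough sketch: a read-through cache that keeps the hot working set memory resident.
// readRecordFromDisk(), the path, and MAX_ENTRIES are placeholders for illustration.
'use strict';

const fs = require('fs/promises');

const MAX_ENTRIES = 100000;   // cap the cache so the hot set stays comfortably in RAM
const cache = new Map();      // Maps keep insertion order, so eviction below is oldest-first

async function readRecordFromDisk(id) {
  // Placeholder: in a real system this would be a database or file lookup.
  return JSON.parse(await fs.readFile(`/var/data/records/${id}.json`, 'utf8'));
}

async function getRecord(id) {
  if (cache.has(id)) {
    return cache.get(id);     // hot path: served entirely from memory, no disk I/O
  }
  const record = await readRecordFromDisk(id);
  cache.set(id, record);
  if (cache.size > MAX_ENTRIES) {
    cache.delete(cache.keys().next().value);   // evict the oldest entry to bound memory use
  }
  return record;
}

module.exports = { getRecord };
```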
You give no information about the payload size (in bytes) of each request, so there's no way to estimate the sizing of your network connection.
You also say 1 million status calls but give no time frame over which these calls must be processed.
Based on your question, your best starting point will likely be to hire a seasoned developer to design your system to maximize memory-resident data.
Using webhooks may also be a poor choice: they make code considerably more complex, so only a few developers will be able to maintain and extend it, which means you'll pay more for development and have difficulty finding developers.
Stick with a LAMP stack and let Apache manage the request threads, rather than webhooks, and your life will be easier and your budget lower.
I'm assuming that you're sending mail through SparkPost and accepting its email transmission events via webhook.
The simplest way to implement and scale this would be to use Amazon API Gateway connected to AWS Lambda's event-driven architecture. The architecture would look like:
SparkPost -> Amazon API Gateway -> AWS Lambda -> external services
You can define webhook APIs in API Gateway very easily through its GUI, and the AWS Lambda event-driven code can be written in Node.js (JavaScript).
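As a rough sketch of what such a handler might look like (assuming an API Gateway proxy integration; EXTERNAL_SERVICE_URL is a hypothetical environment variable, and you should check SparkPost's webhook docs for the exact shape of the event batches it sends):

```javascript
// Rough sketch of a Lambda webhook handler behind an API Gateway proxy integration.
// EXTERNAL_SERVICE_URL is a hypothetical environment variable; check SparkPost's
// webhook docs for the exact shape of the event batches it POSTs to you.
'use strict';

const https = require('https');

exports.handler = async (event) => {
  // With a proxy integration, API Gateway hands us the raw request body as a string.
  let batch;
  try {
    batch = JSON.parse(event.body || '[]');
  } catch (err) {
    return { statusCode: 400, body: 'Invalid JSON' };
  }
  if (!Array.isArray(batch)) batch = [batch];

  // SparkPost wraps each event in an "msys" object; pull out the event payloads
  // (message_event, track_event, etc. -- other event classes exist).
  const events = batch
    .map((item) => item.msys && (item.msys.message_event || item.msys.track_event))
    .filter(Boolean);

  // Forward the events to whatever downstream service needs them (placeholder URL).
  if (process.env.EXTERNAL_SERVICE_URL && events.length > 0) {
    await postJson(process.env.EXTERNAL_SERVICE_URL, events);
  }

  // Return 200 promptly so SparkPost treats the batch as delivered and doesn't retry it.
  return { statusCode: 200, body: JSON.stringify({ received: events.length }) };
};

// Small helper: POST a JSON payload and resolve once the response has completed.
function postJson(url, payload) {
  return new Promise((resolve, reject) => {
    const data = JSON.stringify(payload);
    const req = https.request(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(data) },
    }, (res) => {
      res.resume();            // drain the response body
      res.on('end', resolve);  // we only care that the call completed
    });
    req.on('error', reject);
    req.write(data);
    req.end();
  });
}
```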
This will scale to an arbitrary volume of events without requiring you to invest in much infrastructure up front.
See these examples:
https://developers.exlibrisgroup.com/blog/Hosting-a-Webhook-Listener-in-AWS
http://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started.html