We recently needed to solve a very interesting problem: how to effectively aggregate all the sales data across the country.
Sales data: basically, we want to know who buys what, when, and from which sellers
Effectively: we need that data as fast as possible and as accurate as possible
We had established a system for this in the past, and it did the job very well. However, it couldn't scale as our business grew, so we looked for an alternative.
The first alternative looked very promising: it was almost 100% automatic. The vendor promised that we would just need to sit back while their software (which had to be installed on each of our 50+ servers across the country) automatically pushed all the sales data back to a central server. We would then simply get whatever we wanted from that central server.
We did get the data. However, it was usually missing a bit here or a bit there. Some people also complained that the "pushing" software made their systems too slow. After a while, people started to lose interest in it, and the missing data occurred more and more often. Finally we decided to call it a day and look for the next alternative.
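To make that failure mode concrete, here is a minimal, purely illustrative sketch (all names are hypothetical, and this is not the vendor's actual software) of why a "fire and forget" push agent silently loses records over a flaky link, while an acknowledge-and-retry push does not:

```python
import random

def unreliable_send(record, central, drop_rate=0.3, rng=random):
    """Simulate a push over a flaky network: some records are silently lost."""
    if rng.random() >= drop_rate:
        central.append(record)
        return True   # delivered and acknowledged
    return False      # dropped; the sender gets no error

def push_fire_and_forget(records, central):
    """Push each record once and never check: this is how data goes missing."""
    for r in records:
        unreliable_send(r, central)

def push_with_ack_and_retry(records, central, max_retries=10):
    """Re-send each record until the central store acknowledges it."""
    for r in records:
        for _ in range(max_retries):
            if unreliable_send(r, central):
                break

# 100 hypothetical sales records from regional servers
sales = [{"seller": f"store-{i}", "item": "widget", "qty": 1} for i in range(100)]

random.seed(42)
naive_store, safe_store = [], []
push_fire_and_forget(sales, naive_store)
push_with_ack_and_retry(sales, safe_store)

print(len(naive_store))  # almost certainly fewer than 100: silent gaps
print(len(safe_store))   # the full 100, since every record is acknowledged
```

The point is not the code itself but the design choice it illustrates: a push pipeline with no acknowledgment has no way to even notice the "missing bits," let alone repair them.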
The first lesson: cool idea + bad execution = failure. From a technical standpoint, the first solution was the right way to go, but it failed mostly because of the way we deployed it.
After a while, we came up with another approach, bearing in mind the following points
For the solution to work, we need something that is comfortable for us and also for the people who work with us. If it's not comfortable for them, we have to show them what they gain from it
The solution should give us more control over the whole system
If needed, we can even collect the data by hand, as long as it's fast enough and accurate enough for us
Again, it's not always about cutting-edge technology; it's more about the right technology + the right process + the right way to deploy them
More details on that approach (which seems to solve our problem) will follow in part 2 of this note