However, all the articles I've read explain what it is and how to use it. What they don't cover is WHEN you should use it.
For example, if I have an embarrassingly parallel problem and access to a cluster of computers, I can finish the job faster with Hadoop than by running the computation on a single machine.
Can Apache Thrift make code more scalable or faster? If I write my heavy-lifting code in C/C++ and use Thrift to expose it to my Java web server, will I see a significant gain in performance?
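To make the scenario concrete, here is a minimal sketch of what such a cross-language setup might look like in Thrift IDL. The service name, namespaces, and method are hypothetical, just to illustrate a Java server calling into a C++ backend:

```thrift
// compute.thrift — hypothetical interface between a Java caller
// and a C++ implementation of the heavy numeric work.
namespace java com.example.compute
namespace cpp example.compute

service HeavyLifting {
  // implemented in C++, invoked over RPC from the Java web server
  list<double> crunch(1: list<double> input)
}
```

Running `thrift --gen java --gen cpp compute.thrift` would then generate the client and server stubs for both languages, so the question is whether the C++ speedup outweighs the serialization and RPC overhead this layer adds.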
The original white paper (see http://thrift.apache.org/static/files/t ... 1465515663) suggests that Thrift can even cause performance losses:
We have found that the marginal performance cost incurred by an extra layer of software abstraction is far eclipsed by the gains in developer efficiency and systems reliability.
So it appears that the aim of Thrift is not to provide performance gains, but rather a common interface to facilitate cross-language communication?