In database design classes in Computer Science, we learn that normalization is a good thing. And it certainly is, for databases. For APIs, it is a different story. If a client must do multiple GETs to obtain the data it needs, or multiple PUTs or POSTs to send up data, just because your database happens to be normalized, then something is wrong.
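To make the problem concrete, here is a minimal sketch of what a fully normalized backend forces on the client. The table names, fields, and functions are all illustrative assumptions, not a real API: the point is simply that one screenful of data costs two round trips.

```python
# Two normalized "tables": tweets reference their author by user_id only.
# (Illustrative data; field names are assumptions for this sketch.)
USERS = {42: {"id": 42, "name": "Alice", "description": "API fan"}}
TWEETS = {7: {"id": 7, "user_id": 42, "text": "Hello"}}


def get_tweet(tweet_id):
    # Stand-in for GET /tweets/{id} -- returns only the tweet row.
    return TWEETS[tweet_id]


def get_user(user_id):
    # Stand-in for GET /users/{id} -- a second call, needed only
    # because the data is normalized.
    return USERS[user_id]


# A client that wants to display one tweet with its author's name
# must make two calls:
tweet = get_tweet(7)
author = get_user(tweet["user_id"])
```

On a mobile network, that second call is pure overhead: extra latency, extra battery, extra failure modes.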
One of the functions of an API Server is to de-normalize your data so that clients are spared from making extra REST API calls, with all of the overhead which goes with that. Mugunth Kumar explains this very well in his excellent presentation, using Twitter as an example. When you do a GET on a tweet, it returns not only the Tweet itself, but also other information (e.g. a description of the Twitter user who sent the tweet). This saves the API client (often a mobile app) from making another request for that data. Effectively, the API Server has gathered up that data, which may come from different database tables, and de-normalized it for the response. You can try it out yourself, by looking at the JSON which comes back from a Twitter API GET of the most recent Tweet from my timeline.
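In the same spirit as the Twitter example, here is a sketch of what the de-normalizing step looks like at the API Server: the author's record is embedded inside the tweet before the response goes out. All names and fields here are illustrative assumptions (the real Twitter payload has many more fields).

```python
# Normalized backing data, as it might sit in two database tables.
USERS = {42: {"id": 42, "name": "Alice", "description": "API fan"}}
TWEETS = {7: {"id": 7, "user_id": 42, "text": "Hello"}}


def get_tweet_denormalized(tweet_id):
    """Stand-in for the API Server's GET handler: join the author's
    record into the tweet so the client needs no second call."""
    tweet = dict(TWEETS[tweet_id])          # copy the tweet row
    tweet["user"] = USERS[tweet["user_id"]]  # embed the user row
    return tweet


response = get_tweet_denormalized(7)
```

The client now gets the tweet text and the author's description in a single response, exactly the saving the Twitter API provides.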
Many Vordel customers are using the API Server to gather together the data which is returned to the API clients, often taking this data from multiple sources (not only databases, but also message queues and even other APIs). This data is then amalgamated into a single JSON or XML structure. It is often then cached at the API Server, in this structure. In this way, clients are spared from doing multiple calls, and instead (like the Twitter API example above) get the data they need in one request, or can PUT or POST up data in one action, rather than piecemeal. De-normalization is key to this process, and is one of the great benefits of an API Server.