The History of Databases

When I’m talking about database architecture and the benefits ScaleArc brings to data flow, I often find myself reflecting on the history of databases: how they got started, what they used to be, and why they exist in the shape they do today. For a timeline of the history of databases, read this article.

You may remember the introduction of FoxPro and Lotus 1-2-3 to the market back in the late ’80s and early ’90s. Those products were, at their essence, a way to manipulate data at a personal level. And they caught the imagination of developers around the world: if you can manipulate data at a personal level, you can do it on a much larger scale. But to do it on a larger scale for, say, an application, something much more elaborate was needed. That more elaborate “something” was a database.

It began with a file-system-based database. This was simply a file that lived on the disk, with an application built to access it. That single application got access to all the data it wanted – it could write whatever it wanted and read whatever it wanted. But that limited the user, who could reach that file only through that one app.

If five servers needed to access that file, it couldn’t be done. There needed to be a way to lock that file and keep it from becoming corrupted if five users started accessing it at the same time.
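The locking problem above can be sketched with POSIX advisory file locks. This is a minimal illustration of the general idea, not how any particular early database actually worked; the file name `records.db` and the functions `append_record` and `read_records` are hypothetical names chosen for this sketch.

```python
import fcntl
import json
import os

DB_FILE = "records.db"  # hypothetical data file shared by several processes

def append_record(record: dict) -> None:
    # Take an exclusive lock before writing, so two concurrent writers
    # can't interleave their bytes and corrupt the file.
    with open(DB_FILE, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until no one else holds the lock
        try:
            f.write(json.dumps(record) + "\n")
            f.flush()
            os.fsync(f.fileno())        # force the bytes out to disk
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

def read_records() -> list:
    # A shared lock lets many readers proceed at once,
    # but keeps them out while a writer holds the exclusive lock.
    with open(DB_FILE) as f:
        fcntl.flock(f, fcntl.LOCK_SH)
        try:
            return [json.loads(line) for line in f]
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```

Note that these locks are advisory: they only protect the file if every process accessing it plays by the same rules, which is exactly why a central coordinator – a database server – became necessary.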

That was the driving force behind the invention of network-based databases. That’s the basis for Oracle, SQL Server, and any other database that you see today. Modern databases take that file format and turn it into something that can actually be accessed over the network.

Accessing data over the network drove further evolutionary requirements. The moment data was made available on the network, there was a need to make it highly available. If a database failed, any machine trying to access that data couldn’t complete its task; so now you have the evolution of the backup machine: database redundancy, including the concept of the primary (or parent) and secondary (or slave, or child) databases.

So now we have this architecture of all these applications connecting to the primary database, and then… boom. The database fails and all those connections are lost. Some applications stumble, some crash, and some have to be completely restarted. Users lose transactions and the whole user experience is, simply, bad. With high availability built into the architecture, there is at least a good chance that the applications can simply connect to the secondary. But often that transition is neither instant nor transparent to the user.

This is where ScaleArc comes in. ScaleArc is essentially a highly available, highly scalable interface between the applications and the database servers. Applications connect to ScaleArc, not directly to the database. And if a particular database server dies for any reason, ScaleArc instantly and transparently queues all the writes, waits the sub-second it takes for a secondary database server to be promoted to primary, and then sends the data to the new primary database, providing a seamless experience to the user.
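The queue-writes-during-failover pattern described above can be sketched in a few lines. To be clear, this is a generic toy illustration of the pattern, not ScaleArc’s actual implementation; `FailoverProxy` and `send_to_db` are hypothetical names invented for this sketch.

```python
import queue
import threading

class FailoverProxy:
    """Toy proxy that buffers writes while no primary database is available."""

    def __init__(self, send_to_db):
        self.send_to_db = send_to_db   # callable(primary, statement) -> result
        self.primary = "primary-1"     # current primary; None during failover
        self.pending = queue.Queue()   # writes held while failover is in progress
        self.lock = threading.Lock()

    def execute_write(self, statement):
        with self.lock:
            if self.primary is None:
                # No primary right now: hold the write instead of
                # surfacing an error to the application.
                self.pending.put(statement)
                return "queued"
            return self.send_to_db(self.primary, statement)

    def on_primary_down(self):
        with self.lock:
            self.primary = None

    def on_primary_promoted(self, new_primary):
        # A secondary has been promoted: drain everything that arrived
        # during the outage, in arrival order, to the new primary.
        with self.lock:
            self.primary = new_primary
            while not self.pending.empty():
                self.send_to_db(self.primary, self.pending.get())
```

The point of the sketch is the application’s view: it calls `execute_write` the same way before, during, and after the failure, and never sees a lost connection.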

In my next blog, I’ll go further up the evolutionary chain of databases and discuss cloud architecture, the benefits of moving data to the cloud, and the cloud’s not-so-silver lining.
