Simplifying and automating provisioning can speed IT
Within IT, automating repetitive system-administration procedures is an increasingly important part of speeding up develop/test/deploy cycles, while also freeing IT developers and operators to focus on more valuable tasks.
One area that’s been lagging in this regard is database management systems (DBMSs). Tools like Ansible, Chef, and Puppet help script DBMS provisioning, but they don’t address the rest of a DBMS instance’s lifecycle, such as performance tuning and monitoring. Providers like Amazon Web Services have established baseline customer expectations for Database as a Service (DBaaS): automated processes, self-service for developers, and support for scripting and automating infrastructure operations.
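To make "scripted provisioning" concrete, here is a minimal Python sketch that renders a provisioning spec into a repeatable `docker run` command. All names and values are illustrative; real playbooks in Ansible, Chef, or Puppet express the same idea declaratively.

```python
def provision_command(spec):
    """Build a `docker run` command line from a provisioning spec."""
    cmd = ["docker", "run", "-d", "--name", spec["name"]]
    for key, value in spec.get("env", {}).items():
        cmd += ["-e", f"{key}={value}"]          # engine configuration
    for host, container in spec.get("ports", {}).items():
        cmd += ["-p", f"{host}:{container}"]     # network exposure
    cmd.append(f'{spec["image"]}:{spec["tag"]}')
    return cmd

# Illustrative spec for a MySQL instance
mysql_spec = {
    "name": "orders-db",
    "image": "mysql",
    "tag": "8.0",
    "env": {"MYSQL_ROOT_PASSWORD": "change-me"},
    "ports": {3306: 3306},
}

print(" ".join(provision_command(mysql_spec)))
```

Because the spec is plain data, the same definition can be checked into version control and replayed identically in every environment.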
However, this isn’t sufficient to support a modern DevOps methodology when it comes to provisioning and managing DBMSs in today’s cloud-native environments.
In classic IT, where a DBMS instance, once created and spun up, is likely to run for years, this isn’t as much of a concern.
But in today’s cloud-native architectures, instances of virtual machines and containers (and the operating systems, application stacks, and applications within them) come and go like popcorn kernels, spinning up when a relevant microservice or a demand surge requires them and going away when the need is gone.
Also, where classic IT tended to standardize on one, two, or maybe three databases throughout the enterprise, today’s cloud developers have access to dozens of purpose-built databases, such as MySQL, Cassandra, Couchbase, CouchDB, MongoDB, PostgreSQL, and Redis, whose features are often a better fit for a given task than those of traditional commercial RDBMSs like Oracle or SQL Server.
Using many different databases makes sense – the right tool for each task.
But it’s essential that:
- These databases, however many and varied, and their configurations be available across development, test, and production environments
- New, already-configured instances can be spun up quickly, both manually and automatically, on demand as needed for development, testing, and to scale production capacity
- Management of these DBMSs and their instances extends beyond provisioning to performance tuning and monitoring throughout each instance’s lifecycle.
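The first two requirements above come down to configuration parity: one shared baseline definition, with only capacity-related settings varying by environment. A small sketch, with all names and values illustrative:

```python
# One baseline database configuration shared across environments,
# with only sizing overrides differing per environment.
BASELINE = {
    "datastore": "postgresql",
    "version": "15",
    "params": {"max_connections": 100},
}

OVERRIDES = {
    "development": {"flavor": "small", "replicas": 1},
    "test":        {"flavor": "small", "replicas": 1},
    "production":  {"flavor": "large", "replicas": 3},
}

def instance_config(environment):
    """Merge the shared baseline with environment-specific sizing."""
    config = dict(BASELINE)
    config.update(OVERRIDES[environment])
    return config
```

Because every environment derives from the same baseline, an instance spun up for testing behaves like its production counterpart except in capacity.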
For the company as a whole, DevOps methods and tools, particularly combined with Agile methodologies, make it possible to improve, change, and add business processes far more quickly and flexibly, which can translate into greater competitiveness and productivity. As the range of DBMS uses for applications grows and changes, it’s increasingly important that DBMSs, along with virtual machines, containers, operating systems, and the other components of an application “stack,” be brought under a DevOps methodology.
And to do that, you need tools that can wrangle heterogeneous database management systems, configurations and instances, throughout each one’s lifecycle.
A Database as a Service (DBaaS) approach, and software that implements it, allows IT, developers, and DevOps teams to administer a wide range of database technologies through a single, common management infrastructure. The result is that routine administrative tasks like provisioning, clustering, replication, and backup and restore are handled in a simple, unified way. Without such tools, managing a mix of databases can quickly soak up precious system administrator time and attention… and speedbump development, testing, and operations.
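The "single, common management infrastructure" idea can be sketched as heterogeneous engines behind one administrative interface. The engine classes and return strings below are illustrative placeholders, not a real management product:

```python
from abc import ABC, abstractmethod

class ManagedDatabase(ABC):
    """Common administrative interface across database engines."""
    @abstractmethod
    def backup(self): ...

class MySQLDatabase(ManagedDatabase):
    def backup(self):
        return "mysql: logical dump taken"       # engine-specific detail hidden here

class MongoDatabase(ManagedDatabase):
    def backup(self):
        return "mongodb: snapshot taken"

def nightly_backups(fleet):
    """One routine drives backups for every engine in the fleet."""
    return [db.backup() for db in fleet]
```

The operator scripts against `ManagedDatabase` alone; engine-specific mechanics stay inside each implementation, so adding a new database type doesn’t change the administrative routines.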
Within OpenStack environments, OpenStack Trove is rapidly becoming the standard way to provision and manage both relational and non-relational database resources.
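As a sketch of what self-service provisioning through Trove involves, the function below assembles the arguments for a database-instance create request. The parameter names approximate those accepted by python-troveclient’s `instances.create()` call and should be checked against its documentation; the values are illustrative.

```python
def trove_create_args(name, flavor_id, size_gb, datastore, version):
    """Assemble arguments for a Trove database-instance create request."""
    return {
        "name": name,
        "flavor_id": flavor_id,
        "volume": {"size": size_gb},        # persistent storage, in GB
        "datastore": datastore,             # e.g. "mysql", "cassandra"
        "datastore_version": version,
    }

args = trove_create_args("orders-db", "m1.small", 5, "mysql", "5.7")
# With an authenticated troveclient instance this would be submitted as:
#   client.instances.create(**args)
```

The point is that a developer requests a database by describing it, and the service handles the engine-specific mechanics.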
If your business is creating and using cloud-based applications, the odds are good that a) your IT is already using some DevOps methods and tools, and b) they’ve got databases in the mix. So bringing the DBMSs under your DevOps umbrella shouldn’t be a stretch – and should quickly pay off.