This video features Dave Lively of Cisco’s Intercloud business unit at Trove Day 2015, sharing his view of the public and private cloud options available today. He also shares that Cisco’s cloud model is based on OpenStack while using Trove for database as a service. You can view the slides here.

Transcript of Session: 

Ken:  I’m really excited to have our next speaker. I think everyone should get the opportunity to hear what he’s got to say. Dave Lively is responsible for a lot of the capabilities around what Cisco is doing with the intercloud. The initiative that they’re doing I think is really pretty exciting. I think when you look at the approach that Cisco’s taking to enabling cloud and the use of cloud throughout the world and how distributed it is, it’s really pretty amazing stuff. In particular, we’re really excited about kind of collaborating with them and seeing how Trove can fit into that environment and kind of seeing how we can work together to make that something that’s a really compelling value for all the people between Cisco, their partners, their customers and everything else.

With that I’d like to introduce Dave Lively, who is responsible for Cisco intercloud.

Dave:     Thanks.

Ken:       Thank you Dave.

Dave:   First I want to start by introducing myself. Again, Dave Lively. I run project management for the Cisco Intercloud Services platform. I should probably start by talking about what intercloud means and what the platform is. Why we’re doing some of the things that we’re doing. Why we headed down the OpenStack path. What we’re doing in database as we headed down the OpenStack path. Why we’re working with Tesora. Why we chose Trove and wanted to go down the Trove path as opposed to a lot of the other database paths that are out there. Then further, some of the things that we’re looking for from the community’s perspective on sort of where to go next.

If we start with why are we doing intercloud and what is intercloud? It starts, at the high level, as we work with businesses, enterprises, service providers, other developers: it’s all about digitization and digital transformation. Lots of new technologies coming on board. Everything these days is about speed, speed, speed. How fast can we go to leverage these new technologies, to take advantage of these new business paradigms, to look at these new business models? It ends up being that it’s not sort of just one cloud that is out there. It’s not just a single cloud vision. It’s not just let’s go to public cloud, and it’s not everything on private cloud; really it’s about multiple clouds. That’s the driver behind intercloud at Cisco. There’s not one option. There’s not one path that customers are going down. There are lots of customers who, for a whole lot of reasons (whether it’s because they have existing applications with legacy data, or maybe there’s security or control or compliance or other reasons), want to keep their applications and data on site. They want to keep them on site in a private cloud.

There are multiple different types of private clouds, whether you’re going down the VMware path or the OpenStack path or the Microsoft path; lots of different options on the private cloud side. Lots of different options on the provider side as well, for service providers that want to build clouds. Again, we’ve got people building stuff based around bare metal. We’ve got people building stuff based on OpenStack. Other people building stuff based around Microsoft technologies or around VMware technologies. Lots of options on the service provider side, as well, in terms of how they’re building clouds.

There are a number of public clouds that are out there. Again, we’ve got Microsoft based, VMware based, Amazon based, OpenStack based. Lots of options on every single one of these areas and not a whole lot of interoperability between them. That’s one of the things that we’re trying to drive is to drive more common experiences across all of them.

Cisco, of course, has business in all these different environments. We sell a tremendous amount of infrastructure into the private cloud space. We do as well into the provider space as they want to build up their clouds. What’s sort of new and different, and what we’re doing right now at Cisco, is we’re not just in the business of selling infrastructure for other people to build clouds. We’re also building a cloud ourselves. As we stepped back and looked at all the different things that Cisco is doing from a software perspective, from a “where do we go” software services transformation, every one of those software businesses was looking at: how do I migrate to cloud? How do we leverage cloud? How do we move to more of a cloud-delivered business model? Cisco wanted to take on that responsibility ourselves for our own products, our own platforms, our own services, and build our own cloud.

The key here, however, is that it needed to be something that we could use across all of our businesses. We looked at cloud and we looked at a platform to use for our collaboration business, security, mobility, network management, a lot of different businesses that Cisco is doing. It had to be simple. It had to be developer centric. Everything needed to have an API, because gone are the days when we had IT administrators that were coming in, logging in, doing stuff with the GUI, provisioning infrastructure and then handing it over to the dev team. Everything now is much more: I need my automation software spinning stuff up, replicating stuff, scaling it out, scaling it back. Everything needed to be done via an API. Everything needed to be done simply. I needed to have the ability to sort of grow and shrink from a consumption perspective or demand perspective as my application needed to.
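The grow-and-shrink automation described here can be sketched as a small decision function. This is a minimal illustration with hypothetical utilization thresholds, not Cisco’s actual scaling logic:

```python
# Minimal sketch of demand-driven scaling: grow the fleet when it runs hot,
# shrink when it idles. Thresholds and policy are hypothetical examples.

def desired_instances(current: int, cpu_utilization: float,
                      low: float = 0.25, high: float = 0.75) -> int:
    """Return how many instances the automation should run next."""
    if cpu_utilization > high:
        return current + 1          # scale out under load
    if cpu_utilization < low and current > 1:
        return current - 1          # scale back, but keep at least one
    return current                  # hold steady in the comfortable band
```

In practice a loop like this would feed the result to the cloud’s provisioning API; the point is that the whole cycle happens with no administrator in a GUI.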

Since not every team could afford to build their own cloud inside of Cisco, that’s where our group came in, in terms of building that cloud for all of Cisco’s services and applications. The key here, again, was all about the pace. It was all about how quickly the Cisco teams needed to be able to deliver that next collaboration application, or deliver that next security application, deliver that next set of mobility functionality, that next set of analytics and big data capabilities. The pace is extremely fast. We wanted to have a platform that took on a lot of those common tasks. The things that every one of those development teams needed to do (because development teams were our target), we wanted to take on ourselves.

It’s actually kind of funny: a few years ago, in my last role, I worked in the field; I ran a team of consulting engineers. We were helping to put together a training program for the broader field at Cisco, the broader set of systems engineers, to help them come up to speed on software and applications and databases, etcetera. We put together this sort of multi-week training program for them. Try to take a sort of CS undergrad degree and shove it into a couple of weeks, so we could sort of cram these engineers through some training. One of the classes was taught by a developer. He said a quote which I use all the time because I love it, which is “the thing that us developers love about cloud is that it takes everything that used to be our problem and it makes it your problem” (from a cloud perspective). I no longer have to worry about scaling things because the cloud will do it for me. I no longer have to worry about more infrastructure; the cloud will do it for me. I don’t have to worry about replication; the cloud will do it for me. Everything is “the cloud will do it for me.” I say it a little bit tongue in cheek, but it’s kind of true: if you look at the things we’re doing in cloud, a lot of it is give the developers the APIs and then we’ll abstract all of the hard stuff underneath away from them.

That’s what we want to do in cloud, because when they’re just focused on APIs, they can focus on their application and focus on what differentiates them without having to worry about all of the infrastructure underneath. If every development team had to develop all that infrastructure underneath, there’d be a lot of duplication, a lot of wasted effort, a lot of wasted time.

We want to think continuous. It’s not just a one-time thing. This is an interesting sort of transition that Cisco’s going through. I’m sure any of you who are at somewhat of a large company are seeing this as well, which is a transition to, first, DevOps. I’ll be honest, it’s been a little bit of a foreign concept at Cisco, because a lot of the application teams that we work with are used to building packaged software. Oftentimes it was packaged software that went on a packaged physical appliance and then you shipped it all to the customer. They could start to think of agile and sort of how do I move from developing in nine-month cycles or whatever to developing more rapidly in two- or three-week sprints.

But DevOps was yet again something altogether different for them, because now it got them thinking of not just how do I write and code my application and then throw it over the wall to the ops guy, because the ops guy is going to run it, but: how do I develop my application? How do I develop it in such a way that I can automate the deployment of that application? How do I develop it in such a way that the monitoring of that application or service or feature is automated, that the alerts get generated automatically? How do I set up my code development chains so that I can put code into them? Develop the right branching techniques, etcetera, so that I can upgrade components, but not everything. I don’t roll back; I’m always rolling forward to the next release. A lot of these concepts are hard for developers.

The other part that was really hard for developers at Cisco was getting them to think in terms of microservices. We see this not just at Cisco but across the sort of market here: how do you get them to think about, instead of this one big application, breaking up that application into multiple small components? Enable each one of those components to be automatically deployed, automatically upgraded, automatically replicated, scaled, etcetera, such that I can look at different features and services on an individual basis. It’s all about continuous deployment. It’s not just once every nine months or six months or three months, but much more in terms of continuous development. These are all the principles that we wanted to build into our cloud environment.

We based it off of OpenStack from a platform perspective, but everything that I just talked about, or at least the majority of what I just talked about, actually has very little to do with OpenStack. There’s a lot of tools that end up sitting around OpenStack, from an automation perspective, or just managing identity, managing monitoring. A lot of those tools and capabilities don’t necessarily exist within OpenStack. They certainly didn’t exist within the database community. A lot of what we were trying to focus on is not only how do we make it such that we can upgrade our platform quickly, so that we can continue to roll out new features, capabilities, and services to our customers very quickly, but how do we also enable them to use the tools on top of our platform so that they can develop and innovate quickly as well.

The Cisco Intercloud Services platform is a global platform. We have regions worldwide, in Europe, in North America, in Asia. It’s all based on OpenStack. We had a few different platforms that we could choose from as we were looking to build this out when we started down the journey. A couple of years ago we chose OpenStack primarily because we were looking for a platform that had open standards. Open standards in the cloud world meant looking at the open source community, and OpenStack was the one that sort of had the most traction that we saw. Then, and this should come as no surprise as one of the reasons for Cisco and OpenStack, it was the only one that had networking as one of the major core services. Obviously here at Cisco we believe that networking is critical to being able to have applications perform well in a cloud environment. So, a lot of focus on OpenStack.

Then as we look at that sort of multi-cloud strategy, think back to intercloud: whether it is technology that is being deployed on-prem in our customers’ private cloud environments, whether it’s technology being implemented by service providers as they build up their clouds, or whether it’s technology that we’re using to build our own cloud, OpenStack was a common technology that every one of these environments could use. Because when you look at the types of applications that we see people developing now, and we look at where the market is going now, especially in sort of the internet of things kind of space, it’s not just about applications living in one place, but applications being distributed across multiple. I think collaboration is a great example, or mobility is another good example: I may want some services of mobility running up in the cloud, but there’s going to be other data that I need to keep in country for sovereignty reasons or privacy reasons. Maybe there’s other data, maybe from a collaboration perspective, that for security reasons I need to keep on-prem. If a collaboration application needs to be able to run maybe in a public cloud, maybe in a partner’s service provider cloud, or maybe on-prem, the unifying technology that we were able to look at across all of those was OpenStack, being able to leverage open standards. Key for us: open standards. Another reason why we chose Trove as we were marching down the database path. I’ll talk to that a little bit more.

Some examples of some of the applications that are running on the platform today. This sort of shows the types of areas where we’re focusing. Cisco Spark, a collaboration application. EnergyWise is kind of an internet of things application, gathering a lot of energy data from devices that are out on-prem, etcetera, pulling that back, doing analytics and visualization on that. Mobility IQ, doing kind of the same thing from a mobility perspective. Gathering data from all these wireless access points and wireless controllers that are out there, whether it’s in stadiums or hotels or other places. Enabling people to sort of visualize what’s going on with their network environments.

A lot of these things are, let me gather a bunch of data. Let me pull that data back in. Let me store that data somewhere and let me start to do some analysis on it. The one thing that all these applications have in common is database needs. Every one of these today is doing their own database. They’re all bringing their own database. Some are using MySQL. Actually most of these guys are using PostgreSQL. Some are using some proprietary databases. Some are using Cassandra for some other types of data. All of these applications need database capabilities.

If you look at where these developers came from, they’re typically not DBAs. They’re not people who know how to install, maintain, configure and continue to run and keep healthy database applications over time. Because the typical model from a developer perspective in this space had been: I’m going to develop packaged software. That packaged software is going to need a database. Oh, you know what, most of my customers are using Oracle, so I’ll develop my application to work with Oracle. The customer brings and runs their own Oracle. Our application leverages that Oracle database and we’re able to get going. Now that they work in a cloud, they have to provide and run their own database.

They are all sort of bringing them in today. Every one of them is asking for database capabilities. That’s why we started marching down the database as a service path. First thing we did, went out and talked to all of our different customers around what their database needs were. No surprise, they’re kind of across the board, but there were some sort of consistent themes that we did see. Some of them surprised us.

Let me do a quick poll here. Who thought the number one database request was going to be MySQL? Raise your hand if you think MySQL. We had a couple. How about PostgreSQL? Few more. Cassandra? Few more. Mongo? Anyone? No Mongo lovers in the house. Number one, by almost a ten-to-one margin, and it surprised me, was PostgreSQL. In hindsight, maybe it shouldn’t have surprised me, because these are people who have in a lot of cases come from developing applications that may have been pointed towards Oracle, and Postgres gave probably one of the easier paths in terms of moving off of Oracle: using a lot of the same things that they were comfortable with in an Oracle world, but in more of an open source and cloud environment. PostgreSQL was number one, MySQL was number two, Cassandra was number three, but they were all different. That was sort of the key. We can’t just live on one database; we need to be able to have multiple databases out there that we can leverage.
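That multiple-database requirement maps directly onto Trove’s design, where the datastore is just a field in the instance-create request. A hedged sketch of that request body (the field names follow the Trove v1.0 API; the flavor names, sizes, and versions are illustrative examples):

```python
# Sketch of a Trove v1.0 instance-create request body. One API shape covers
# PostgreSQL, MySQL, Cassandra, etc.; only the datastore block changes.
# Flavor names, volume sizes, and versions below are illustrative examples.

def trove_create_body(name, flavor_ref, volume_gb, ds_type, ds_version):
    return {
        "instance": {
            "name": name,
            "flavorRef": flavor_ref,
            "volume": {"size": volume_gb},
            "datastore": {"type": ds_type, "version": ds_version},
        }
    }

# Same call, different datastores:
pg = trove_create_body("billing-db", "m1.medium", 10, "postgresql", "9.4")
cassandra = trove_create_body("sensor-db", "m1.large", 50, "cassandra", "2.1")
```

The application team asks for a database; the datastore type is a parameter rather than a separate operational stack per engine.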

We don’t want the complexity. None of these development teams were database administrators. They didn’t want to deal with the let-me-install-it, let-me-configure-it, let-me-upgrade-it, let-me-patch-it, let-me-set-up-master-slave side of things. There’s a bunch of complexity that they didn’t want to deal with, and didn’t really know how to deal with, to be honest. That goes back to the developer’s quote: take everything that used to be my problem and make it yours. They wanted to take all the things that were their problems and complexities around database and throw them onto us and make them our problem.

The last thing is feature sets. Remember every one of these guys is coming from an environment of I get to spec out my own specific database. I get to spec my own database, on my own custom hardware with a specific number of cores, and a specific amount of RAM. They got to spec out all of that in the past. They don’t get to spec that out in cloud. It’s trying to find a little bit of that sort of balancing point between full customization and dramatically reduced complexity. When we were looking at our database as a service model, it was how do we find that line.

On one side, you’ve got the full everything-as-a-service, which is great economics in cloud, great speed. I can just hit an API and it fires up a database. Great scale, because I can make it bigger. On the other side you’ve got the control and security of having your own database on-prem, in your own environment. You’ve got more design freedom in terms of specing out the size of the database and the IOPS and everything else. What we really wanted to do was try to find that blend, that thing that sits right in the middle. That gives the balance between enough control that they can spec out the database for their application’s needs, while taking away as much of that complexity as possible to make it really easy for them to deploy, leverage and use database.
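One way to picture that middle ground is a small menu of presets that still resolve to concrete specs: the developer picks a size (the reduced-complexity side) but can still override individual fields (the control side). This is an illustrative sketch with hypothetical values, not how the actual offer is packaged:

```python
# Illustrative balance between simplicity and control: t-shirt-size presets
# that resolve to concrete flavor/volume specs, with optional overrides.
# All preset values here are hypothetical.

SIZES = {
    "small":  {"flavorRef": "m1.small",  "volume": {"size": 5}},
    "medium": {"flavorRef": "m1.medium", "volume": {"size": 20}},
    "large":  {"flavorRef": "m1.large",  "volume": {"size": 100}},
}

def spec_for(size, **overrides):
    """Resolve a preset to a concrete spec, letting callers override fields."""
    spec = dict(SIZES[size])
    spec.update(overrides)
    return spec
```

A team that just wants "medium" gets sane defaults; a team that knows it needs a bigger volume overrides only that one field.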

When we looked at the options to try to get there, option number one is just build it ourselves. We’d build our own sort of orchestration environment on top of existing database options. The whole database-as-a-service framework we’d just sort of custom code ourselves. That’s what Amazon did with a lot of their databases: write it themselves. We’re not a company that does that. Database is not sort of one of Cisco’s core competencies. It’s not one of the things that we’re focused on driving and selling. For us, database is more of an enabler. Database as a service in terms of writing it all ourselves was not really an option; it would have taken us too long and we didn’t really have the skillset to be able to pull that off.

On the other side, we had sort of full open source. Let’s just take what everyone is doing with open source and OpenStack Trove. Let’s leverage the community software, the community code. A couple of good things here. It aligns with our commitment to OpenStack; we’re already building on top of OpenStack. Trove’s already got some of the capabilities that we’re looking for from an orchestration perspective. It’s multi-database, so it fits our needs to be able to spin up multiple databases for the different types of applications and services that our customers need. But it’s not our core competency. Not something that we could pull off and do entirely ourselves.

We picked the Trove side, but we wanted to pick it and do it with some help. That’s where as we looked around the market and we saw what Tesora is doing, we really wanted to work with Tesora. They’re absolute experts in driving this. Everything behind Trove, etcetera. Everything that we wanted to do, either Tesora could help us with or they could implement and add on top of. Then as they add things on top of it, it would drive that back into the community as well. It was a great alignment between the stuff that we wanted to drive and work with and the stuff that Tesora could bring to the table as well. It ended up being a really great partnership between the two.

Not everything is there, in terms of what we’re looking for now. One of the great things about working with Tesora, is we’re able to drive new features into Tesora’s product and then get those out into the community as well. This is all about driving open standards, driving new capabilities into the offer and getting those capabilities out into the community as well.

I’m not going to spend a huge amount of detail going through everything here. As you can see, a lot of what we’re doing is ultimately going to be targeted at the M release for OpenStack. What’s the name again? Mitaka, for the M release. It lands in Tesora’s product first, and then it’s going to be upstreamed into the Mitaka release of OpenStack. If you look at the capabilities here, a lot of these are coming from our market focus and drive around hybrid, around capabilities that enterprises would need as they look to leverage some on-prem capabilities and some off-prem capabilities in combination, and from our needs around Postgres. A lot of things in terms of replication and other capabilities that may have existed on MySQL but hadn’t made it over to Postgres yet, and being able to leverage that database. A lot of things we’re working with Tesora on right now.

What’s next? Where are we going? We want to continue to work with Tesora and the community to help make this better. If I look at the big use cases that we’re driving, I’ll start at the bottom there, on hybrid. The big thing for us there is hybrid. Back to intercloud: a world of multiple clouds, applications running across multiple clouds. You’re going to have some of your data and application environments on-prem, some off-prem. They’re going to be in completely disparate environments. You’re not going to have them in the same OpenStack domain; they will be across domains. One of the capabilities that we’re working on with the OpenStack community is being able to do multiple domains, to share things and look at regions, etcetera, across multiple different OpenStack domains, whether it’s on-prem private, public cloud, or multiple different types of public cloud coming from multiple different types of providers.

We see a lot of the same things on the database side as well: how do we help enable our enterprise customers as they’re moving their applications off their existing legacy database environments and into an environment that can be shared across multiple clouds, so they can have different portions of their application running across multiple clouds? How do we enable the replication, master/slave, redundancy, etcetera, and backup across those types of scenarios?
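At the API level, one of the building blocks for this already exists in Trove: a backup taken in one environment can seed a new instance elsewhere, since restore is just an instance-create that references a backup. A hedged sketch of those request shapes (field names follow the Trove v1.0 API; the IDs and names are placeholders):

```python
# Sketch of Trove's backup and restore request bodies, one building block
# for moving database state between environments. IDs are placeholders.

def backup_body(instance_id, name):
    # Body for POST /v1.0/{tenant}/backups
    return {"backup": {"instance": instance_id, "name": name}}

def restore_body(name, flavor_ref, volume_gb, backup_id):
    # Restore is an instance-create that points at a backup via restorePoint.
    return {
        "instance": {
            "name": name,
            "flavorRef": flavor_ref,
            "volume": {"size": volume_gb},
            "restorePoint": {"backupRef": backup_id},
        }
    }
```

The cross-domain part (shipping the backup artifact between disparate clouds) is exactly the gap the hybrid work described above is aimed at.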

Second would be big data and analytics. I often talk about how it’s not just big data and analytics, it’s data and analytics. In data, you’ve got big data and you’ve got little data. Oftentimes when I look at the services and applications, certainly the ones that are coming out of Cisco, there are use cases on almost every single one of them where they’ve got big data and little data in the same applications. Enabling those to work together, and being able to look at different types of databases that fit underneath the Trove umbrella as we start to work with different database companies that are specialized around IoT/IoE, which is that sort of third space.

One of the things is every one of these devices that are out here and every one of these sensors that are all over the place, they’re all generating data. They’re all sending that data back somewhere. One of the things we’re seeing in IoT/IoE right now is that there is much more requirement around write capability, as these massive numbers of packets and little bits of data from sensors and monitors, etcetera, come back in. How do we do tons of writes of that data? Then we’ll figure out how to analyze that data, or get that data into a stream that we can do stream-based analytics off of.

A lot of data is coming into that cloud, and we need to be able to do lots of writes, which is a little bit flipped from most databases, where you get the data in and then you’re doing lots of reads off of that data. Here I’m doing a lot more writes on top of that data.
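A common way to cope with that write-heavy, small-record pattern is to batch on the way in. Here is a toy sketch under stated assumptions (the sink callable and batch size are hypothetical, and a real ingestion pipeline would add durability and retry logic):

```python
# Toy sketch of write-heavy ingestion: buffer many small sensor readings
# and persist them in batches instead of issuing one write per packet.

class WriteBuffer:
    def __init__(self, sink, batch_size=1000):
        self.sink = sink                # callable that persists one batch
        self.batch_size = batch_size
        self.pending = []

    def write(self, reading):
        self.pending.append(reading)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.sink(self.pending)     # one bulk write instead of many
            self.pending = []

# Seven readings with batch_size=3 become three bulk writes.
batches = []
buf = WriteBuffer(batches.append, batch_size=3)
for i in range(7):
    buf.write({"sensor": "s1", "value": i})
buf.flush()
```

Batching is one reason write-optimized stores like Cassandra show up in this space: the database sees fewer, larger operations even as the sensor fleet fans out.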

These are the types of things that we’re looking at from a futures perspective in terms of where we want to go. We want to drive it through the community. We’re not looking to have Cisco proprietary databases, APIs, things like that. We really want to work with the community on this. We’re really happy that Tesora is able to work with us on driving this stuff. Sort of a good counterexample, I think, would be Sahara. We’re doing big data in Cisco in our cloud as well. On the big data side, we’re not looking to leverage Sahara up front. It would have been great to have a common standard API, an orchestration layer, to be able to spin these things up, but we don’t see quite the community involvement from the big data vendors around Sahara that we do on the database side around Trove. There’s no good sort of Tesora equivalent on the big data side that’s helping to drive Sahara. It’s another place where it would have been nice to have a standard framework, standard APIs, standard ways to do these things across multiple distributions, etcetera, but we’re not quite seeing it play out in the exact same way on the big data side as we are on the database side.

I know I talked probably rather quickly here. I’ve got to try to get a bunch of stuff out. I’m in the coveted after lunch spot when everyone’s sort of coming back in and starting to maybe fall asleep a little bit from some of the great food that these guys have here. Big thanks to Tesora for arranging all of the food.

It wouldn’t be a conference like this if I didn’t make a plug that we’re actively hiring. I’ve got to throw that one out there. We’re actively hiring engineers, developers, and product managers as well. If you’re looking, or if you know somebody, reach out to me. I would love to hear from you in this space. Thanks very much, guys. Enjoy the rest of your day.