Achieve Universal Data Access With FIX
How do you collaborate on petabytes of data with performance that makes it look local? Tune in to hear Faction CTO Matt Wallace explain FIX, Faction’s data fabric that provides a single IP and single namespace across all clouds and on-premises locations.
Hey, how’s it going? It’s Keith Townsend, principal of the CTO Advisor, and you’re watching what is, in person, an amazing scene here in what I want to say is central Denver. I’m not sure exactly; it’s just beautiful. I have with me the CTO of Faction, Matt Wallace. Matt, welcome back to the program.
Thanks for having me.
So, Matt, in the previous conversation, where we talked about use cases for multi-cloud, you hinted at this genomics use case. And this is something that’s near and dear to my heart as someone who worked in biopharma for years. One of the challenges that I experienced practically is that as the price of sequencing the human genome decreases, the amount of data that results increases.
Multi-Cloud Enables Collaboration Across Multiple Clouds
The more data, the more scientists and organizations want to collaborate on that data, which caused a physical problem for us: petabytes of data that we needed to collaborate on across multiple service providers, multiple partners, multiple clouds. Multi-cloud. And Faction solves that problem. How?
You know, I like to call us the cloud between the clouds, and it’s because we have this multi-cloud data service as, like, a core product: something that can hold those petabytes of data in one location, with one copy, but make it available in each cloud simultaneously, with performance that makes it look like it’s local. That is some real power for unlocking some of those use cases, right?
It lets you do things you wouldn’t be able to do otherwise: scale across multiple clouds, collaborate across multiple clouds, even use services from multiple clouds. You want to analyze your data in AWS, but then visualize it in Power BI in Azure? You can do that.
But really, it’s about creating this whole platform between the clouds that allows you to build anything you need to stitch these things together. Data is the center of that universe, and it’s a core capability, but it’s also not the only thing that we do between the clouds.
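To make that single-namespace idea concrete, here is a minimal Python sketch. It assumes a hypothetical mount point (/mnt/faction/genomes) where the shared volume appears at the same path inside compute instances in every cloud; the path and file layout are invented for illustration and are not Faction’s actual interface.

```python
# Minimal sketch of the single-namespace idea: one copy of the data,
# mounted at the same (hypothetical) path inside VMs in every cloud,
# so there is no cross-cloud transfer step between pipeline stages.
from pathlib import Path

# Hypothetical mount point; the real path depends on your Faction setup.
DATA = Path("/mnt/faction/genomes")

def analyze(sample: str) -> None:
    """Runs in an AWS instance: read raw reads, write a derived result."""
    raw = (DATA / "raw" / f"{sample}.fastq").read_bytes()
    out_dir = DATA / "results"
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / f"{sample}.summary.txt").write_text(
        f"{sample}: {len(raw)} bytes of reads\n"
    )

def visualize(sample: str) -> str:
    """Runs in an Azure instance (e.g., feeding Power BI): read the
    same result file from the same path, with no copy between clouds."""
    return (DATA / "results" / f"{sample}.summary.txt").read_text()
```

The point of the sketch is that the “transfer” step disappears: both clouds operate against the one copy.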
The data was generated on-premises through data collection, and we took the individual genome sequences and uploaded the data to a centralized repository. Now we want to action that data via public cloud resources; I don’t have 300 GPUs on-prem. However, you have a thing on your back that says defy gravity.
Defy data gravity.
The Faction Internetwork Exchange (FIX)
Defy data gravity. There’s something that, repeatedly over my career, I have not been able to defy, which is: the GPUs are in Azure, or Google, or Amazon, and the data is not there. How are you guys solving that problem? Because I need low latency and high bandwidth.
Yeah. You know, Faction has a whole bunch of patents, and they all tie into this idea of tying resources into multiple clouds with deep network isolation. Deep in the Faction DNA is this isolation, security, performance, throughput. And that network fabric is at the core of everything we do. We call it the FIX: the Faction Internetwork Exchange.
And you can think of it as a fabric that can be divided into virtual sub-fabrics, almost like an AWS VPC; think about it that way, maybe, from a security and isolation standpoint. But then what we do is take those data services, and anything else people turn up, even things like appliance virtual machines, for example, and stitch them into multiple clouds through this fabric. But it’s not like what people are used to with a typical cloud exchange, where you’re just tying a layer 2 VLAN in and it’s just in isolation.
We’re truly taking these environments and providing the whole thing, you know: the managed service that ties it in, the BGP routing, QoS for the data services, all these types of things that are necessary to make it holistic, right? Because it’s one thing to say, oh, I can attach a storage array to a cloud, maybe even a couple of clouds, or, oh, I’ve got access to this thing and the service lets me touch three clouds. But integrating a super-high-performance, multi-petabyte data service with a multi-cloud fabric, providing all these different aspects of security, isolation, and compliance, is, as you know from your past, a huge deal. If you’re doing life sciences data, you can’t go and leak people’s data, and you have a ton of audits to pass. Integrating all these things is just part of the challenge, but for us, it’s just part of the service that we provide.
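As a rough mental model of the sub-fabric isolation Matt describes, here is a toy Python sketch. It is not Faction’s implementation; the tenant and endpoint names are invented, and a real fabric enforces this at the network layer rather than in application code.

```python
# Toy model of the isolation idea above: a fabric divided into virtual
# sub-fabrics, where attachments (cloud on-ramps, data services,
# appliance VMs) can only reach peers in their own sub-fabric.
from dataclasses import dataclass, field

@dataclass
class SubFabric:
    name: str                               # one tenant's isolated slice
    attachments: set[str] = field(default_factory=set)

class Fabric:
    def __init__(self) -> None:
        self.sub_fabrics: dict[str, SubFabric] = {}

    def attach(self, sub_fabric: str, endpoint: str) -> None:
        sf = self.sub_fabrics.setdefault(sub_fabric, SubFabric(sub_fabric))
        sf.attachments.add(endpoint)

    def can_reach(self, a: str, b: str) -> bool:
        """Traffic flows only between endpoints sharing a sub-fabric,
        mirroring VPC-style isolation."""
        return any(a in sf.attachments and b in sf.attachments
                   for sf in self.sub_fabrics.values())

fix = Fabric()
fix.attach("tenant-genomics", "aws-us-east-1")
fix.attach("tenant-genomics", "azure-eastus")
fix.attach("tenant-genomics", "data-service-1")
fix.attach("tenant-other", "gcp-us-central1")

assert fix.can_reach("aws-us-east-1", "data-service-1")       # same tenant
assert not fix.can_reach("gcp-us-central1", "data-service-1") # isolated
```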
Multi-Cloud Data Fabric
So what I’m hearing you say is that you basically built, and again, you’ve used these words, but I want to emphasize it, a data fabric to handle the data transfer: the ability to transfer data to and from AWS, Azure, Google, and the major cloud providers.
When I think about that, I think about these cloud wholesalers, where I can go in and say, you know what, I need 40 gigabits of overall bandwidth. Today, I want two gigabits of that to go to AWS, three gigabits to go to Oracle, and four gigabits to go to some other cloud provider, or even back to my data center. That’s a lot of engineering.
It is a lot of engineering, certainly. You know, the automation is a big part of it, right? When you have a team that’s engineering cloud applications, they want the thing that’s enabling multi-cloud to work as a cloud service, right?
It needs to be automated, it needs to have self-service, it needs to have APIs, right? So we certainly provide that. And you have to think ahead, too: a DIY solution, or a set of one-dimensional solutions, is not going to give you the ability to go from five gigabits to 50 gigabits in the span of 15 minutes or less. And that, I think, is a necessary part of the idea of scaling across multiple clouds.
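To illustrate the self-service idea, here is a hedged Python sketch that treats the split of fabric bandwidth across clouds as data you change with an API call rather than an engineering project. The capacity figure, plan shape, and provider names are hypothetical; Faction’s actual API is not shown here.

```python
# Hedged sketch: per-destination bandwidth as a plan you can rewrite
# on demand. TOTAL_GBPS and the provider names are invented examples.
TOTAL_GBPS = 40  # overall fabric capacity in this example

def reallocate(plan: dict[str, int]) -> dict[str, int]:
    """Validate a per-destination bandwidth plan (in Gbps) against capacity."""
    used = sum(plan.values())
    if used > TOTAL_GBPS:
        raise ValueError(f"plan needs {used} Gbps, only {TOTAL_GBPS} available")
    return plan

# Today's split from the example above: 2 Gbps to AWS, 3 to Oracle,
# 4 to another provider or back to the data center.
current = reallocate({"aws": 2, "oracle": 3, "datacenter": 4})

# Fifteen minutes later, burst the AWS path up to 30 Gbps for a GPU
# job, shifting throughput along with the compute.
burst = reallocate({"aws": 30, "oracle": 3, "datacenter": 4})
print(current, burst)
```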
What good is it if I say, hey, you can take 100 GPUs here and shift them over there because the spot instances are cheaper, or because capacity opens up, if you can’t also shift the throughput that you need to go along with it? So that’s definitely a key part of it. But the other key part, too, is, you know, I can tell you from doing this in the real world that this idea of a data service that’s attached to multiple clouds is like 90% of the problem of dealing with data gravity, but it’s not all of it.
A great example is, I have a customer who has a very complex Azure networking environment, right? Tons of hub-and-spoke topologies, lots of virtual network peering, and they use a particular vendor’s appliance to help manage that from a software-defined networking perspective. They are able to deploy an endpoint for that appliance into our multi-cloud platform as well, extending their own software-defined networking construct into our data service. No one else can do that.
Work with a Multi-Cloud Expert
So Matt, you just opened up a bag of questions for me; it’s like Santa’s bag just got opened. I have a bunch of questions around latency. How do I select the correct site? How do I know, from an application perspective, how to select which data sets? I’m thinking about visibility into health and performance. There are a thousand architect-level questions.
I’m glad you say that, because this matters, right? And one of the things I’m always saying is that people sometimes start with this idea of what it takes to do this. I actually have, somewhere, this laundry list, because there are literally 100 things you have to worry about to do this right.
So if you want to learn about how to do this right, in Matt’s words, I suggest you reach out to Faction Inc. Thank you for supporting the CTO Advisor and sponsoring this content. If you want to learn more about the CTO Advisor, you can follow me on the web, or DM me on Twitter if we didn’t cover a topic. I’m certain we didn’t cover all topics; it’s way too expansive a conversation. Matt, thanks again for joining us. Talk to you on the next CTO Dose.