AWS EC2: P2 vs P3 instances

Amazon announced its latest generation of general-purpose GPU instances (P3) the other day, almost exactly a year after the launch of its first general-purpose GPU offering (P2).  While the CPUs on both instance families are similar (both Intel Broadwell Xeons), the GPUs are a definite step up.  Note that the P2/P3 instance types are well suited to tasks with heavy computation needs (machine learning, computational finance, etc.), and that AWS provides G3 and EG1 instances specifically for graphics-intensive applications.

The P2s sport NVIDIA GK210 GPUs, whereas the P3s run NVIDIA Tesla V100s.  Without digging too deep into the GPU internals, the Tesla V100 is a huge leap forward in design and specifically targets the needs of those running computationally intensive machine learning operations.  Tesla V100s tout “Tensor Cores,” which increase the performance of floating-point computations, and the larger P3 instance types support NVIDIA’s NVLink, which allows multiple GPUs to share intermediate results at high speed.

While the P3s are more expensive than the P2s, they fill in the large gaps in on-demand pricing that existed when only the P2s were available.  That said, if you’re running a ton of heavy GPU computation through EC2, you might find the P3s that offer NVLink a better fit, and picking them up off the spot market might make a lot of sense (they’re quite expensive on demand).  Here’s what the pricing landscape looks like now, with the older generation in yellow and the latest in green:

When the P2s first came out, Iraklis Mathiopoulos wrote a great blog post where he ran Hashcat (a popular password “recovery” tool) with GPU support against the largest instance size available at the time… the p2.16xlarge.  Just a few days ago he repeated the test against the largest of the P3 instances, the p3.16xlarge.  If you’ve ever played around with Hashcat on your local machine, you’ll quickly realize how insanely fast one p3.16xlarge can compute.  Iraklis’ test on the p2.16xlarge cranked out 12,275.6 MH/s (million hashes per second) against SHA-256, while the p3.16xlarge hit 59,971.8 MH/s, nearly a 5x improvement.  The author’s late-2013 MBP clocks in at a whopping 121.7 MH/s.  The p3.16xlarge instance type is about to get some heavy usage by AWS customers who care more about results than price.

Of course, the test above is elementary and doesn’t exactly show the benefits of the NVIDIA Tesla V100 over the NVIDIA GK210 for ML/AI and neural-network operations.  We’re currently testing different GPUs in our Worker product and hope to share benchmarks based on real customer workloads in the ML/AI space soon.  The performance metrics and graphs that Worker produces give a great visual on model building/training, and we’re excited to share our recent work with our current ML/AI customers.

While most of our ML/AI customers are on-premise, we’ll soon be looking to demonstrate Iron’s integration with P2 and P3 instances for GPU compute in public forums. In the meantime, if you are considering on-premise or hybrid solutions for ML/AI tasks, or looking to integrate the power of GPU compute, reach out and we’d be happy to help find an optimal strategy based on your needs.

GPU support in IronWorker

In the past few months, we’ve spoken with quite a few customers who have added machine learning (ML) tasks to IronWorker. The problem is, these tasks can take a significant amount of time on a CPU compared to a GPU. GPUs were built to handle the parallelization of the complex matrix/vector operations that gaming required, and it so happens that deep learning workloads have similar requirements.
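
If you want a quick feel for that gap, a matrix multiplication benchmark makes it obvious. The sketch below uses the current TensorFlow 2 API (not the 1.x-era scripts we were running) and made-up sizes, so treat it as illustrative; timings will vary wildly with hardware.

import time
import tensorflow as tf

def time_matmul(device, n=4096, runs=3):
    """Average the wall-clock time of an n x n matrix multiply on one device."""
    with tf.device(device):
        a = tf.random.uniform((n, n))
        b = tf.random.uniform((n, n))
        _ = tf.matmul(a, b)            # warm-up so one-time setup cost isn't counted
        start = time.time()
        for _ in range(runs):
            c = tf.matmul(a, b)
        _ = c.numpy()                  # force the computation to finish
        return (time.time() - start) / runs

print("CPU:", time_matmul("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU:", time_matmul("/GPU:0"))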

With that in mind, we thought it was about time to add GPU support to IronWorker. We started with a simple test: image recognition via TensorFlow. After hacking the example Python script to add the ability to download an image via a URL, we zipped up the script and uploaded it to IronWorker, using the latest TensorFlow Docker image.

> zip classify_image.zip classify_image.py
> iron worker upload --zip classify_image.zip --name classify_image tensorflow/tensorflow:latest-gpu python classify_image.py --image_url "https://www.petfinder.com/wp-content/uploads/2012/11/91615172-find-a-lump-on-cats-skin-632x475.jpg"
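
For reference, the URL-download tweak is only a few lines of Python. Here’s a minimal sketch of the kind of change we made to the stock classify_image.py (written against Python 3 here; the download helper and argument handling are our own simplification, so treat it as illustrative rather than the exact diff):

import argparse
import tempfile
import urllib.request

def download_image(image_url):
    """Fetch the image at image_url into a temp file and return its path."""
    suffix = ".jpg" if image_url.lower().endswith(".jpg") else ".png"
    tmp = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
    with urllib.request.urlopen(image_url) as response:
        tmp.write(response.read())
    tmp.close()
    return tmp.name

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--image_url", help="URL of an image to classify")
    args, _ = parser.parse_known_args()
    image_path = download_image(args.image_url) if args.image_url else None
    # ...hand image_path to the script's existing TensorFlow inference code.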

In this initial push, we released support for the G2, G3, and P2 GPU instances on AWS. Once we fired off that task, here’s what things looked like on our end:

It took about 10 seconds of run time to classify the image via IronWorker on a p2.xlarge instance on AWS. We didn’t have a chance to run this against a non-GPU instance, but we’ll leave that as an exercise for the reader; we’re pretty sure it will take a little longer than 10 seconds! The actual output from the script is as follows:


Found device 0 with properties:
name: Tesla K80
major: 3 minor: 7 memoryClockRate (GHz) 0.8235
pciBusID 0000:00:1e.0
Total memory: 11.17GiB
Free memory: 11.10GiB

Egyptian cat (score = 0.60871)
tabby, tabby cat (score = 0.12714)
lynx, catamount (score = 0.07766)
tiger cat (score = 0.07641)
cougar, puma, catamount, mountain lion, painter, panther, Felis concolor (score = 0.00148)

We’ll be working closely with a few customers in the coming months on some large ML/AI projects, and we’ll post as much as we can on their use cases and resulting benchmarks. As ML becomes more and more prominent in Business Intelligence operations, we’re expecting to see a big increase in GPU usage. If you have any questions about our GPU support, drop us a line and we’d be happy to chat.

Robotics with Iron

We recently sponsored a robotics event in Tokyo held by OnLab, DigitalGarage and Psygig… and it was awesome.  Also, yes… the course below was an actual course from the event.

Participants broke into teams, built drones, implemented machine learning techniques, gathered and analyzed data via Iron, and maneuvered their drones across multiple courses.  The teams that finished the courses and displayed the most innovative technical solutions were crowned champions.

The ability to utilize GPU instances and fire up containers running libraries like Keras and TensorFlow allows heavy computational workloads to be offloaded even in highly dynamic environments.  In the last few months, we’ve been speaking with more and more customers who are using Iron for large ML and AI workloads, often breaking them into distinct types of work units with different GPU, CPU, and memory requirements.

Congratulations to the winners of the contest and all those who participated!  It looked incredibly challenging.  If you have any questions about utilizing GPUs, machine learning, artificial intelligence, or any other computationally heavy lifting with Iron, feel free to contact us at support@iron.io; we’ll be happy to chat.

Full Circle and Ramping up at Iron.io

Iron.io was recently acquired by Xenon Ventures, a private equity and venture capital firm. Xenon Ventures is headed by Jonathan Siegel, a serial entrepreneur who has founded many popular software services and made just as many successful acquisitions.

Here’s where things come full circle: what you may not know is that Jonathan was Iron.io’s first customer and investor back in 2010, prior to Iron.io’s creation. Jonathan was a client and friend of the founders’ consulting business before Iron.io, and he encouraged them to transform their consulting service into a product.

In 2011, the first version of IronWorker was launched and the serverless revolution began. After pioneering this space, Iron.io has grown significantly, adding products like IronMQ, IronCache, and our latest development, our open-source product IronFunctions. This success is due to all of our amazing customers and partners. Thank you!

New and old faces

You may be seeing a new name fly around a bit as well.  I (Dylan Stamat) will be joining Iron.io as General Manager and moving Iron forward. A little about me: I’ve been a personal friend of the founders, was a co-founder at RightSignature (https://rightsignature.com), founded one of the first HIPAA-compliant companies to run on AWS (https://verticalchange.com), was previously the CTO at a large technology consultancy (ELC Technologies), and am a Ruby on Rails contributor, a committer to Ehcache, and a big fan of Erlang and Golang.

You will also see and hear from many other familiar faces at Iron.io: Roman Kononov, our Director of Engineering, who has been leading our engineering office in Bishkek, Kyrgyzstan since 2011; Nikesh Shah, Director of Business Development and Marketing; and various other new and old members of our globally distributed teams.

Roman Kononov and Rob Pike (with an awesome jacket) at Gopherfest.

As of now, we’ve added two new offices to Iron: one in Las Vegas, Nevada, and the other in Tokyo, Japan. We’ve hired new engineers and customer success staff, and we’re continuing to hire. Let us know if you have any interest in joining our team!

What to expect moving forward

New graphs providing quick ways to visualize historical worker concurrency

The short answer is, things are going to get a lot better.  We’ve been very busy since the acquisition.  There have been a lot of bug fixes, improvements to internal tooling, and we’ve added concurrency graphs to help provide more insight into the system.

Near term, we are committed to continuing and ramping up development across our entire product line. This includes better performance and reliability, a new user interface, granular metrics reporting such as the concurrency graphs above, streamlined customer support, new systems to better track feature requests and bug reports, and bug fixes throughout our web applications.

Long term, as it relates to products, we are guided by two core principles: Open Source and Hybrid Cloud Deployments.

I’m excited about the future and about getting to know all of you! We will have more exciting news to announce in the coming months.  Please feel free to reach out, and stay tuned!

Dylan Stamat
GM, Iron.io

In Books: The San Francisco Fallacy

One of Iron’s original investors, Jonathan Siegel, released a book this week that any entrepreneur (or anybody who’s worked in the Bay Area) should definitely read.  It’s titled “The San Francisco Fallacy: The Ten Fallacies That Make Founders Fail”, and Jonathan does an amazing job writing about his personal experiences in the art and science of building businesses.

It just launched yesterday and is being offered for $2.99 this week only (see the link above).  We highly recommend grabbing a copy, and what follows is a great excerpt from the book.  Enjoy!

“It’s all about the tech.”

The Tech Fallacy is perhaps the most pervasive fallacy in the tech world. It is endemic and insidious—perhaps inextricable. It first tripped me up as a teenager in my very first tech venture—but that wasn’t enough to cure me, for I have fallen victim to it often since then.

The Tech Fallacy says it’s all about the tech. Tech is the be-all and end-all of what we do. Get the tech right and the rest will follow.

This belief is deeply, badly wrong—as I first discovered in my teens.

How I Failed as a Pornographer

My first tech business was a kind of online forum. It was called a bulletin board system, where members could chat and share software. It failed.

I launched my second online business two years later. It was another bulletin board system where members could chat and share software. Oh, and they could download porn.

I have my parents to thank for my incipient career as a pornographer. My father bought me my first computer in 1989, during one of his periods of regular employment.

It was an IBM clone with 640 kilobytes of RAM and a 20-megabyte hard disk that weighed at least ten pounds. It had less power and memory than today’s inkjet printer.

The PC had a menu of random shareware on it: one of the most popular was called Lena.exe. It was just a grainy, scanned image of a Playboy Bunny (albeit fully clothed). You ran the program, and it pushed the pixels slowly onto the screen. It took minutes to load the full picture.

I soon outgrew the menu on the machine, and then I went exploring. The operating system, MS-DOS 3.0, came with a manual. I read it. I learned every command. I saw that there were things called “batch files.” I broke them open. This broke the computer. I watched it being fixed. I learned how to fix it myself (which was useful, because I kept breaking it). I learned how to write batch files. 

I did regular teardowns of my machine. The pieces were on big chips with big pins and on full-size circuit boards over a foot in length. With two screwdrivers, I could unscrew, unstrap, and pry apart everything but the few capacitors and resistors soldered to the emerald-green, silicon circuit boards.

Computers were so young then that it wasn’t clear to us what could go wrong, or why things broke. Disks would stop working and then work again. Displays wouldn’t display in one mode, but work in another. Reset a switch or copy a file and all would be mysteriously better. When something went wrong, it took laborious practice, by trial and error, to find the source of the problem.

This early digital technology was, in fact, fantastically unpredictable. It seemed magical that it worked at all, and as I developed my understanding of how it did work, my respect for that underlying magic increased.

I got into code. Like Neo learning to watch the Matrix, at first I just saw scrolling screens of ostensibly incomprehensible characters; gradually, I began to see patterns and life in them.

I started to search out greater challenges and discovered the bulletin board system (BBS) – a rudimentary precursor to the Internet. A modem was used to dial into a BBS at the cost of a normal call. The BBS allowed you to create a user profile, message others, chat in forums, download free software (shareware), and play games such as Trade Wars—a cheesy, text-based, space-frigate game.

The bulletin boards were a fertile environment for viruses, which spread easily via shareware. As a result, one of the highly sought early pieces of shareware was McAfee’s Virus Scanner, created by John McAfee in the 1980s.

McAfee uploaded his homemade virus scanner from his home to a local BBS, and it spread.

But McAfee’s shareware was also a currency in itself. If you met someone on a BBS and she mentioned that she had McAfee 2.052, and you had McAfee 2.088, then you had currency to trade with.

The bulletin boards placed tight restrictions on how many files you could download: typically, you had to upload one file to be allowed to download three – ensuring sustainability and growth for the BBS. So if you had some shareware that a BBS didn’t have, that would allow you to download three new pieces of shareware. And you could then use that to get new shareware from other bulletin boards.

But this was a different era of telecommunications, before cell phones. Landline calls within your local area were free, but long-distance calls were expensive. So if the BBS was locally-based, you could dial in for free; if it was further away, the cost would quickly get prohibitive – especially as downloads could take hours.

This created a market for more locally available software—a shareware broker. Local bulletin boards would set up to fill this market gap by downloading shareware from a distant BBS and providing it to local users for a subscription fee.

Most of these subscription bulletin boards provided a minimal free allowance to nonsubscribers, and because there was often more than one BBS within your toll-free zone, it was possible to seek out and trade shareware between boards. It was a classic network effect, with shareware spreading rapidly and efficiently at very low cost.

Then I saw an ad for a BBS in Sacramento that charged $60 per year and had two thousand subscribers. I thought I had misread it—$120,000 per year? For running a BBS—that is, for keeping a computer and a modem plugged in?

“I could do that!” I thought. And so I did.

Before long, I was hovering in front of my monitor late into the night, watching users work away on my BBS. In those days, you could see the screen your users saw and what they typed. “Analytics” meant staring at the monitor and watching what they were doing.

 I added a notice to the BBS that came up upon login that said I would accept donations. In June 1991, I got my first check in the post for $20.

I thought it would be the first $20 of $120,000. But it turned out to be one of the only checks I received. And it took me another year, and the onset of puberty, to realize that the distant BBS in Sacramento had another section in its files area—one I hadn’t previously discovered.

This wasn’t freeware. It was photos. Lots and lots and lots of photos. Salacious, compromising, illicit photos. There were even a few with crude, jiggly animations of bits bobbing to and fro.

The clue was in the ad, but I had missed it. I had thought the “XXX” was just some elementary formatting.

I had been duped by my prepubescent naivety. It wasn’t the pleasure of using my BBS that people were willing to pay for; it was an altogether more adult pleasure.

My excitement for the BBS drained. I shut it down and asked my mother to help me find a job. She took me to McDonald’s.

I came home in despair, sat down at the kitchen table, and took up the phone book. This was in the Bay Area, so there were pages of computer companies. I started calling. But I was a kid and nobody took me seriously. I kept calling.

Finally, somebody listened, invited me in for a chat, and eventually offered me a job. The company was called ZOZ Computers. I had cold-called my way right through the phone book.

Months into this first employment, I told my new boss about my BBS failure. I wanted to beat my competitors at their own game.

“What’s stopping you?” he asked.

“I’m too young to buy porn,” I said.

“I’ll buy it for you,” he replied. 

He ordered a set of CDs with porn images. I bought a six-disc CD changer for my computer and had three phone lines installed in my bedroom with modems. My mother was working long hours at the time and didn’t notice.

I had built myself a 386 computer and put my old 286 to work answering the phones. One Saturday in 1993, I announced my new BBS via a $15 ad in the local computer trader paper.

Like sixteen-year-olds all over the United States that weekend, I spent a lot of time in my bedroom because of porn. But I suspect there were few others—if any—whose interest was more entrepreneurial than voyeuristic. Though I was a voyeur too, in a sense—stalking my users as they navigated my BBS.

From the moment the red LED lights on the modems first lit up and the modem started to whir, signaling an incoming call, I was hooked to my screen, fascinated by what those callers were doing. When the lights came on, I felt a surge of pride and accomplishment; when they hung, indicating that the system had crashed, I felt a profound sense of failure. 

Users could get three photos a day for free, limited to ten a month, and they were limited to two hours in total online in a month. If they attempted to exceed that, they’d get a message: subscribe.

In order to become a subscriber, they had to download a pack of documents and sign and return them to me with a check: $35 per year. I remember watching my first pack being downloaded and the buzz of thinking, here comes my first customer.

I was aiming for one thousand subscribers. I had the latest tech, decent design, and a good stock of images. But six weeks later, I had made just over $400.

I had no ability to charge credit cards and was relying on people to send checks. Not having a credit card myself, because I was fifteen, it hadn’t occurred to me that I’d need to process credit cards.

Still, I was giddy with success and wanted to share it. I confided in one of the adults I respected, my mother’s landlord. He told her. She wasn’t so thrilled that her teenage son was a pornographer (and she wasn’t so impressed by the distinction between a pornographer and a pornography trader, either). She told me to shut it down. Between that and the too-slow income stream, I decided not to argue my First Amendment rights. At age sixteen, I’d notched up my second tech failure.

The Tech Fallacy Revisited

Selling porn taught me about the Tech Fallacy. I had believed that building great technology must mean that you’re building a great business. That it was all about the tech.

But selling porn taught me that the raison d’être for any business is to give the customer what he wants. He doesn’t want the tech; he wants what the tech can deliver. The tech is just the means to an end.

I thought I could make money from a well-built-and-run bulletin board system; however, decoding those ads in the computer trader papers with their XXXs made me realize that the market was more interested in the XXX than the BBS.

I love good tech. But I’ve learned to follow the good business. It’s a better path.

Take two rival companies. Each is armed with $1 million in investments. One spends $900,000 on its technology development, with $100,000 reserved for going to market (i.e., customer development, sales, and marketing). The other spends $100,000 on technology and $900,000 on going to market.

Who wins? The market-driven one does. It’s not the better product that wins; it’s the product that best knows how to reach its market.

If a thriving company made you its CEO and you decided to let go of its sales and marketing divisions to focus more on the technology, the board would fire you. But walk into almost any two-year-old funded startup, and you’ll see a growing development team budget and a speck, if any, allotted to sales and marketing.

Imagine an upstart competitor trying to challenge an entrenched leader without a sales and marketing division—it would be like a one-legged man in an ass-kicking contest.

Yet in the startups that I encounter, if the company has a team of ten, there’ll be nine developers and just one person who is business driven. Contrast that with companies that have gone public: you’ll see ninety salespeople for ten developers.

Why is this? Partly, it’s intrinsic. People who love what they do often prefer to do it to the exclusion of other things and may not even realize they’re doing this. Tech companies tend to be founded by people who love tech. A single-minded focus on the tech is to be expected but guarded against.

But it’s also a feature of the zeitgeist—the spirit of the times. This takes us back to the first dot-com era. As we’ll see later, the dot-com bubble was characterized by a focus on the idea to the exclusion of all else—even the tech.

When that bubble burst, it left a bad taste in people’s mouths, especially in the investment community. Tech startups acquired the reputation of being charlatans—all talk, no substance.

This perception created a pendulum swing: today, the emphasis in the startup market is often on developing innovative, hardcore technology, with a consequent failure to consider other crucial (maybe more crucial) aspects of the business.

There is a happy medium. Tech is helping to redefine how the world works—how we work and play, find our soul mates and flings, tell our stories, and hail a ride. Tech is required to catalyze these shifts and disruptions. We all love good tech.

But the winners will be those who build the best businesses, not the best tech.

Securing Serverless

Guy Podjarny published a great blog post discussing the Serverless space from a security perspective. I highly recommend reading it as it touches on some great points, going over both the security benefits and possible risks.

Two points he made definitely stood out to me. The first was the concept of a greater attack surface. When I explain FaaS (Functions as a Service) to people, many immediately equate a function with a simple API endpoint. To a degree, they’re correct. So what’s the difference, and why should we look at security for each from different perspectives? I believe the differentiator is how the endpoint is exposed and what its purpose is.

Standard API endpoints often belong to a broader application or set of microservices that reside behind a shared layer of security. This could be dedicated network hardware, hardened reverse proxies, and so on. As a security-minded developer, you develop your endpoint and consider the possible client-side attack vectors (Guy points to the OWASP Top Ten guide from the Open Web Application Security Project, which is a great place to start; Thomas Ptacek also has a great list), then possibly move on to write another endpoint that shares those concerns, all the while relying on that shared first layer of security.

When you start developing a suite of functions, things can get fragmented. Dependencies start to diverge between functions, software versions might differ, and the ways the functions are triggered may require different configurations at the network/gateway layer.
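
As a concrete (and intentionally simplified) illustration of that fragmentation: a standalone function often can’t assume a hardened gateway sitting in front of it, so each handler ends up owning its own validation. The handler shape, field names, and limits below are hypothetical and aren’t tied to IronFunctions or any particular FaaS platform.

import json

MAX_BODY_BYTES = 64 * 1024              # reject oversized payloads ourselves
ALLOWED_FIELDS = {"user_id", "action"}  # made-up schema for illustration

def handler(raw_body):
    """A function that validates its own input instead of trusting a gateway."""
    if len(raw_body) > MAX_BODY_BYTES:
        return {"status": 413, "error": "payload too large"}
    try:
        payload = json.loads(raw_body)
    except ValueError:
        return {"status": 400, "error": "invalid JSON"}
    if not isinstance(payload, dict) or not ALLOWED_FIELDS.issuperset(payload):
        return {"status": 400, "error": "unexpected fields"}
    # ...actual business logic would go here
    return {"status": 200, "result": "ok"}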

The second point that stood out was around monitoring. There are countless battle-tested monitoring solutions out there, but the way functions are deployed and used within an underlying architecture might leave them completely out of scope. Guy makes a great point about how many of these products are agents that rely on long-running processes to watch and collect from. To monitor functions, different techniques need to be implemented for short-lived and hot processes.
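
One technique that fits short-lived processes is having the function push its own metrics at the end of each invocation, rather than waiting for an agent to scrape a process that may already be gone. Here’s a rough sketch; the metrics endpoint and payload shape are made up for illustration.

import json
import time
import urllib.request

METRICS_URL = "https://metrics.example.com/ingest"  # placeholder endpoint

def instrumented(fn):
    """Wrap a function so each invocation reports its own duration and outcome."""
    def wrapper(*args, **kwargs):
        start = time.time()
        status = "ok"
        try:
            return fn(*args, **kwargs)
        except Exception:
            status = "error"
            raise
        finally:
            metric = {"function": fn.__name__,
                      "duration_ms": int((time.time() - start) * 1000),
                      "status": status}
            request = urllib.request.Request(
                METRICS_URL,
                data=json.dumps(metric).encode(),
                headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(request, timeout=2)
            except Exception:
                pass  # metrics delivery must never break the task itself
    return wrapper

@instrumented
def my_task(payload):
    # ...the actual work the function performs
    return {"processed": payload}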

All of these are great problems to have, and they point to fast-moving innovation in already fast-moving industries. You’ll see most vendors and platforms already tackling these issues and building solutions into their products. This space is still young! Here at Iron, we’re committed to making IronFunctions a respected open-source solution for delivering FaaS wherever you want to deploy it.

Delivering on the Promise of Multicloud Lambda-like Functionality

In February, we launched a beta called Project Kratos. It promised to bring Lambda-like functionality to any cloud: public, private, hybrid, or on-premises. As we quickly approach Q4, February seems like a long time ago, but so much has happened since then.

Over the past seven months, serverless computing has gained momentum as more than just the hot topic of the moment. Because it allows enterprises to build and deploy applications and services at scale on flexible platforms that abstract away physical infrastructure, it’s quickly becoming a must-have for the modern enterprise. It will soon be a competitive advantage for those already implementing it.

Our journey with serverless has also moved from a project announcement full of promises to a solution that is widely available today.  First, in April, we announced the general availability of our multicloud solution. Since then, we’ve systematically partnered with leading cloud providers to support multicloud development.

In April, Iron.io announced its partnership with Mirantis to bring event-driven, serverless functionality to the OpenStack community. The joint solution enables enterprise developers using OpenStack to deliver applications and services faster through the serverless experience provided by Iron.io.

In May, Iron.io announced its collaboration with Cloud Foundry Foundation, home of the industry-standard multi-cloud platform, to integrate the Iron.io API with the Cloud Foundry platform.

In June, Iron.io brought the serverless experience to Red Hat OpenShift — a pairing that provided users with an end-to-end environment for building and deploying applications at scale, without the headaches of complex operations.

And in August, Iron.io announced its strategic partnership with Mesosphere, enabling microservices and serverless computing for modern data centers. Joint customers using Mesosphere’s Data Center Operating System (DC/OS) with Iron.io could experience enhanced flexibility to develop their hybrid cloud strategy and run distributed job processing across heterogeneous environments.

Yesterday, we announced that serverless functionality is now available on Cloud Foundry and that Iron.io supports Diego as a runtime for Iron.io workloads. Iron.io can now be deployed on top of Cloud Foundry, run inside of Cloud Foundry, and scale out Cloud Foundry containers.

Wow. I was here for all of it and it still seems like a lot, but it’s only the beginning. The Iron.io team is committed to bringing a serverless experience to developers and companies far and wide.

If you want more on how we define serverless and why the world is moving this way, check out Chad Arimura’s presentation Best Practices for Implementing Serverless Architecture from the O’Reilly Software Architect conference, or Dave Nugent and Ivan Dwyer’s great fireside chat about serverless computing.

Iron.io and Mesosphere’s strategic partnership enables microservices and serverless computing for modern data centers

Lots of people have been asking about how they can use Mesosphere and Iron.io together. It makes sense, because Iron.io’s workload processing engine and hybrid microservices architecture are perfectly suited to take advantage of Mesosphere’s Datacenter Operating System (DC/OS), the first open and comprehensive platform for building, running and scaling modern enterprise applications.

What is Serverless Computing and Why is it Important

Serverless computing has blown up in the past six months, and along with all the excitement come a lot of questions. I’ll attempt to address some of those questions and talk about the pros and cons of serverless.

We created Iron.io five years ago to solve the problems that serverless computing solves; in fact, we built the company on this premise. We were trying to solve a problem we were having at our previous company, where we had to set up a DIY data processing system for each one of our customers, then manage and monitor all of those servers. We wanted (needed!) a system that let us write code to crunch a lot of data and have it run on an as-needed basis, without having to think about or manage the servers. And we wanted to be able to use that same system for all of our clients and have it easily scale out to many more.

So, like any self-respecting engineers, we went ahead and solved our own problem, and that solution became Iron.io.

Navigating the Cloud Foundry Ecosystem of Ecosystems: An ISV Perspective

By Cloud Foundry

Even Neil deGrasse Tyson would be impressed with how quickly and effectively the Cloud Foundry community has evolved into a fully organic ecosystem of ecosystems. This is because forward-thinking organizations are putting a stake in the ground that Cloud Foundry will be the foundation for all future software development and deployment. In this multi-cloud, platform-centric world, where do ISVs fit?

As a relatively new member of the Cloud Foundry Foundation, Iron.io has first-hand experience in how to communicate, collaborate, and contribute with the members of the community to extend the platform where applicable and satisfy customer needs when requested. The key is knowing what you bring to the table, and doing it the cloud-native way.

In this session, Ivan Dwyer shares a few anecdotes from Iron.io’s experiences working with Cloud Foundry community partners across integration engineering, co-marketing, and joint sales efforts – what’s worked, what hasn’t, and what’s coming next.