Every spring, IBM hosts our enormous Think Conference with days of lightning sessions, labs, workshops, presentations, and predictions from some of the smartest people in the world. Every year I come away amazed by what our customers are conceiving and achieving and by the role that we at IBM are able to play.
This year was no different. As before, thousands of attendees traded ideas and collaborated on complex problems in real time. They listened attentively. They inspired each other. They expanded and accelerated their previous plans.
But that’s all just par for the course at an annual Think Conference.
What struck me in particular this time went beyond all that.
This time, I had a deeper sense that our strategy on behalf of clients and customers was resonating in new ways — more concretely and more urgently. It’s one thing to know that you’re living in extraordinary times. It’s another to understand exactly what makes them so extraordinary. The last decade has delivered a torrent of rapid and unpredictable change. And while the pace and scale of the change continues undiminished, I think we can see more clearly now what it takes to ride the wave. That was the focus of my keynote at this year’s Think: https://ibm.co/THINK2019_Dinesh-Nirmal_Keynote.
First, it’s a change in attitude and culture. When it comes to data, after years of playing defense, clients can finally believe in a real path to stability, clarity, and flexibility — even when faced with complicated architectures, legacy data silos, and archaic processes, even when faced with a data estate that continues to shift and grow under their feet. Modernizing your data estate means establishing a culture that cultivates talent, encourages change agents, and sows the seeds for new data initiatives. As Rob Thomas observes, market leaders are likely to be the “organizations that drive mass experimentation in AI.”
Second is a new attention to architecture, in particular an embrace of data virtualization, microservices, and orchestration that lets you replatform and refactor where necessary to make data and data access cloud-native.
Third is leveling up to technology that lets you manage data end-to-end regardless of source, flow rate, or final use. Getting there requires supporting technologies for collaboration, transparent governance, and self-service for your data science teams. A big hint here: the best-of-breed offerings (and the best bargains) are almost always built on top of open source.
How will you know you’ve got it right? You’ll know when your system at large is capable of two things in particular:
1. It functions seamlessly on a multicloud architecture — on-prem, private, and public clouds, even across vendors.
2. It lets your data science teams collect, organize, and analyze any available data, giving them full reach to unlock business insights with AI and machine learning.
You could try building such a system in-house with the team you have on hand. Many enterprises have tried, and some have succeeded. But I’ll end with a plug for an alternative approach, an IBM offering that many clients have already embraced: IBM Cloud Private for Data is a full-featured data platform that puts into practice the principles I’ve outlined above. Built on a foundation of open source software and data virtualization, it delivers high performance and transparency across multicloud environments. As Forrester recently noted: “With IBM Cloud Private for Data, IBM has pre-integrated capabilities that allow clients to be productive in a week or less.”
I’ll include some links below for those who want to learn more. In the meantime, I want to acknowledge all of those who attended Think this year. Thank you for the inspiration and the energy. I know we’ll do great things in 2019 — and beyond.
IBM Cloud Private for Data (ICP4D) product details:
Experience it live:
Dinesh Nirmal – Vice President, IBM Data and AI Development
Follow me on Twitter: @dineshknirmal