1 00:00:02,520 --> 00:00:07,080 Speaker 1: Bloomberg Audio Studios, podcasts, radio news. 2 00:00:07,520 --> 00:00:10,840 Speaker 2: IBM and Groq are announcing a strategic partnership to give 3 00:00:10,880 --> 00:00:15,760 Speaker 2: clients ultra high speed, low latency AI capabilities via Groq's 4 00:00:15,840 --> 00:00:18,720 Speaker 2: inference technology. For more on how this partnership is going 5 00:00:18,760 --> 00:00:22,279 Speaker 2: to provide greater access to the full potential of enterprise AI, 6 00:00:22,400 --> 00:00:25,160 Speaker 2: we're joined by Rob Thomas, Senior Vice President of Software 7 00:00:25,280 --> 00:00:28,440 Speaker 2: and Chief Commercial Officer at IBM, and Jonathan Ross, CEO 8 00:00:28,560 --> 00:00:31,160 Speaker 2: and founder of Groq. And Jonathan, I want to start 9 00:00:31,160 --> 00:00:32,920 Speaker 2: with you. You know, the way that I look at this 10 00:00:33,680 --> 00:00:36,600 Speaker 2: is it's a very interesting go-to-market channel for you, 11 00:00:36,960 --> 00:00:39,320 Speaker 2: a sales channel. Think about all of the clients that 12 00:00:39,400 --> 00:00:42,760 Speaker 2: IBM has and how you've tried to grow the company. 13 00:00:42,800 --> 00:00:46,159 Speaker 2: Explain how people will access LPUs through this or through 14 00:00:46,200 --> 00:00:47,120 Speaker 2: the cloud matrix. 15 00:00:48,159 --> 00:00:51,600 Speaker 1: Absolutely, it's an extraordinary opportunity for both of us. IBM 16 00:00:51,720 --> 00:00:55,080 Speaker 1: is going to have their sellers sell a Groq SKU, and 17 00:00:55,120 --> 00:00:58,400 Speaker 1: so now you'll be able to directly access our speed.
18 00:00:58,920 --> 00:01:01,959 Speaker 1: The advantage that we offer, you could think 19 00:01:01,960 --> 00:01:05,080 Speaker 1: of it a little bit like offering broadband in the 20 00:01:05,120 --> 00:01:07,640 Speaker 1: era when dial-up wasn't fully rolled out and people 21 00:01:07,680 --> 00:01:10,160 Speaker 1: were still trying to connect to the internet. Our LPUs 22 00:01:10,200 --> 00:01:13,840 Speaker 1: are just significantly faster, but we also keep the cost down. 23 00:01:13,959 --> 00:01:17,360 Speaker 1: Just imagine if you were to offer broadband and you 24 00:01:17,480 --> 00:01:21,840 Speaker 1: charged more per bit of data that was sent over 25 00:01:21,840 --> 00:01:25,160 Speaker 1: the line, it would be uneconomical. Broadband increases demand. 26 00:01:25,840 --> 00:01:29,639 Speaker 1: With agentic use cases, it's particularly important to reduce the latency. 27 00:01:29,840 --> 00:01:31,880 Speaker 1: You don't want to ask a question, wait, and come 28 00:01:31,959 --> 00:01:34,600 Speaker 1: back ten minutes later. You'd rather get the answer in under 29 00:01:34,600 --> 00:01:36,240 Speaker 1: a minute. 30 00:01:36,760 --> 00:01:41,319 Speaker 2: Rob, under this arrangement with Jonathan, does IBM make any 31 00:01:41,319 --> 00:01:45,800 Speaker 2: sort of financial investment into Groq, or is there some 32 00:01:45,880 --> 00:01:49,240 Speaker 2: kind of sales or revenue split? Explain the economics of 33 00:01:49,280 --> 00:01:50,560 Speaker 2: this deal for you guys. 34 00:01:50,880 --> 00:01:51,560 Speaker 3: Big picture, 35 00:01:51,560 --> 00:01:54,880 Speaker 4: we have a lot of momentum in AI with watsonx, 36 00:01:55,000 --> 00:01:58,200 Speaker 4: as we said on our earnings call last quarter, seven and 37 00:01:58,200 --> 00:02:01,080 Speaker 4: a half billion dollars as a book of business, and 38 00:02:01,280 --> 00:02:04,560 Speaker 4: we're trying to solve the client problem of how do
39 00:02:04,680 --> 00:02:06,400 Speaker 3: they deploy AI faster. 40 00:02:06,920 --> 00:02:10,280 Speaker 4: So this partnership is all about what Jonathan said, which 41 00:02:10,320 --> 00:02:13,799 Speaker 4: is five x performance at twenty percent of the cost. 42 00:02:14,200 --> 00:02:17,200 Speaker 4: We've seen it with watsonx running on Groq, and 43 00:02:17,240 --> 00:02:20,200 Speaker 4: so we will be distributing Groq as part of our 44 00:02:20,240 --> 00:02:23,720 Speaker 4: go-to-market, and there's a revenue share as part 45 00:02:23,760 --> 00:02:26,720 Speaker 4: of that. We are really excited because we've seen clients 46 00:02:26,880 --> 00:02:30,440 Speaker 4: already getting an impact on how they're deploying AI because 47 00:02:30,440 --> 00:02:32,280 Speaker 4: of the integration of our technology together. 48 00:02:33,720 --> 00:02:35,640 Speaker 5: Let's talk about that, Rob, a little bit more, because 49 00:02:35,760 --> 00:02:37,480 Speaker 5: you're the man who's in charge of the software business; 50 00:02:37,480 --> 00:02:40,600 Speaker 5: you're also really responsible for the worldwide revenue and profitability 51 00:02:40,680 --> 00:02:43,440 Speaker 5: of your company. So help us understand why Groq was 52 00:02:43,440 --> 00:02:46,320 Speaker 5: the obvious choice. How is it helping your clients get 53 00:02:46,360 --> 00:02:48,040 Speaker 5: answers faster on the inference side of things? 54 00:02:48,040 --> 00:02:52,480 Speaker 4: We looked at every possibility in the market, and 55 00:02:53,160 --> 00:02:57,120 Speaker 4: the clients are looking for significant performance, so some of 56 00:02:57,160 --> 00:03:00,000 Speaker 4: that changes how your call center operates or how your 57 00:03:00,080 --> 00:03:04,080 Speaker 4: supply chain runs, and then you combine that with a 58 00:03:04,120 --> 00:03:05,200 Speaker 4: fraction of the cost. 59 00:03:05,600 --> 00:03:07,120 Speaker 3: Suddenly the economics make sense.
60 00:03:07,400 --> 00:03:10,280 Speaker 4: AI does have a cost problem, and we think this 61 00:03:10,320 --> 00:03:12,959 Speaker 4: breaks through that. And at IBM, we've said we're going to 62 00:03:13,040 --> 00:03:15,880 Speaker 4: drive four and a half billion of productivity by the 63 00:03:15,960 --> 00:03:19,120 Speaker 4: end of this year. That's another example of AI truly 64 00:03:19,200 --> 00:03:21,480 Speaker 4: having an impact. And the number one question I get 65 00:03:21,480 --> 00:03:23,680 Speaker 4: from clients now is how are you doing that at 66 00:03:23,720 --> 00:03:26,840 Speaker 4: IBM and can you help us do that? And we 67 00:03:26,880 --> 00:03:29,680 Speaker 4: think the combination of IBM and Groq can make this 68 00:03:29,760 --> 00:03:31,440 Speaker 4: a reality for any company. 69 00:03:32,880 --> 00:03:36,080 Speaker 5: Well, let's dig into that a little bit now with you, Jonathan, 70 00:03:36,120 --> 00:03:39,000 Speaker 5: because the integration with watsonx Orchestrate, what does 71 00:03:39,000 --> 00:03:41,480 Speaker 5: that look like on your side? How does that happen, 72 00:03:41,520 --> 00:03:42,400 Speaker 5: and happen seamlessly? 73 00:03:43,680 --> 00:03:48,840 Speaker 1: So the watsonx API is available for anyone to use today. 74 00:03:48,880 --> 00:03:52,360 Speaker 1: It'll be invisible to most users. It'll simply work. We 75 00:03:52,400 --> 00:03:55,200 Speaker 1: have a compatible API, and this is something we've been 76 00:03:55,200 --> 00:03:58,680 Speaker 1: working on. We will also work on some lower level 77 00:03:58,720 --> 00:04:03,280 Speaker 1: integrations with vLLM, which is a technology that IBM is 78 00:04:03,360 --> 00:04:06,680 Speaker 1: very deeply involved in. But it should just be transparent. 79 00:04:06,720 --> 00:04:08,840 Speaker 1: You should just get more speed.
Just imagine one day 80 00:04:08,880 --> 00:04:11,560 Speaker 1: you come home, you had dial-up, and now you have 81 00:04:11,640 --> 00:04:13,400 Speaker 1: broadband and it costs less. 82 00:04:15,160 --> 00:04:18,719 Speaker 2: Rob, where's the demand coming from on your side? Like IBM 83 00:04:18,800 --> 00:04:22,200 Speaker 2: Granite or some other agentic workload that they want to 84 00:04:22,279 --> 00:04:25,679 Speaker 2: run using the Groq LPUs? Are these public sector names? 85 00:04:25,720 --> 00:04:29,520 Speaker 2: Are they private sector SMEs? I'm trying to understand who 86 00:04:29,560 --> 00:04:30,480 Speaker 2: you're serving with it. 87 00:04:30,960 --> 00:04:34,080 Speaker 4: As often happens, I would say financial services have been 88 00:04:34,360 --> 00:04:37,240 Speaker 4: early adopters. But the thing that has changed in the 89 00:04:37,240 --> 00:04:40,320 Speaker 4: market in the last six months is everything is moving 90 00:04:40,440 --> 00:04:44,280 Speaker 4: to multi-model. We have IBM models that we open source, 91 00:04:44,320 --> 00:04:47,680 Speaker 4: which are the Granite models. We announced the partnership with Anthropic. 92 00:04:48,120 --> 00:04:50,880 Speaker 4: We have a partnership with Mistral and Llama, just to 93 00:04:50,960 --> 00:04:53,960 Speaker 4: name a few. What is incredible about what Jonathan and 94 00:04:54,000 --> 00:04:57,840 Speaker 4: team have built is any model can run and get 95 00:04:58,000 --> 00:05:00,040 Speaker 4: instant improvement run on 96 00:05:00,080 --> 00:05:01,279 Speaker 3: the LPUs from Groq. 97 00:05:01,400 --> 00:05:03,839 Speaker 4: So I think this is a combination of a multi-model 98 00:05:03,880 --> 00:05:07,360 Speaker 4: world accelerating inference with Groq. 99 00:05:07,800 --> 00:05:09,240 Speaker 3: I think this is a great combination. 100 00:05:11,040 --> 00:05:15,920 Speaker 2: Jonathan, does this capacity already exist, or are you supply constrained still?
101 00:05:15,920 --> 00:05:17,800 Speaker 2: Have you got to go out and build it, either in 102 00:05:17,880 --> 00:05:20,039 Speaker 2: Saudi, Finland, or here in the States? 103 00:05:21,240 --> 00:05:23,960 Speaker 1: So the entire world is supply constrained, and I would 104 00:05:24,000 --> 00:05:26,400 Speaker 1: actually expect that to continue for at least the next 105 00:05:26,440 --> 00:05:29,400 Speaker 1: five to ten years when it comes to AI. Our 106 00:05:29,440 --> 00:05:31,719 Speaker 1: advantage is that we have a supply chain that actually 107 00:05:31,839 --> 00:05:35,400 Speaker 1: ramps much faster, so customers will be able to come 108 00:05:35,400 --> 00:05:38,560 Speaker 1: to IBM, put in an order, and we will be 109 00:05:38,560 --> 00:05:41,480 Speaker 1: able to fulfill that faster than you would be able 110 00:05:41,480 --> 00:05:45,400 Speaker 1: to with other technologies. But the supply constraints are real, 111 00:05:45,640 --> 00:05:47,839 Speaker 1: and this is another reason to start working with IBM 112 00:05:47,960 --> 00:05:50,719 Speaker 1: sooner: the sooner you get access to that capacity, the 113 00:05:50,760 --> 00:05:52,440 Speaker 1: sooner you're going to have it. I can't tell you 114 00:05:52,480 --> 00:05:55,600 Speaker 1: how many startups and other companies come 115 00:05:55,640 --> 00:05:58,960 Speaker 1: to us looking for capacity, because some 116 00:05:59,000 --> 00:06:01,760 Speaker 1: of them are actually growing twenty or even thirty percent 117 00:06:01,880 --> 00:06:05,839 Speaker 1: per week or per month, which is an astronomical growth rate. 118 00:06:06,400 --> 00:06:10,440 Speaker 1: But by approaching us early, we can build to your needs.
119 00:06:12,080 --> 00:06:14,880 Speaker 5: You were just mentioning, Rob, all the partnerships you 120 00:06:14,920 --> 00:06:18,080 Speaker 5: have when it comes to LLMs and the offerings that 121 00:06:18,120 --> 00:06:21,880 Speaker 5: you're intertwining with yours. Will you go to others to 122 00:06:22,080 --> 00:06:24,719 Speaker 5: ensure that inference is as fast as possible, or is 123 00:06:24,720 --> 00:06:26,000 Speaker 5: this exclusive with Groq? 124 00:06:26,680 --> 00:06:30,240 Speaker 4: We are open to working with anybody in the ecosystem 125 00:06:30,240 --> 00:06:33,720 Speaker 4: of AI around what we're doing specifically on the acceleration 126 00:06:33,800 --> 00:06:36,640 Speaker 4: with Groq. We want to lean into this partnership. That's 127 00:06:36,680 --> 00:06:39,799 Speaker 4: why this is the one that we've announced today, because 128 00:06:40,040 --> 00:06:44,359 Speaker 4: we have confidence working together with Groq. As Jonathan mentioned, 129 00:06:44,400 --> 00:06:46,960 Speaker 4: we're also enabling some of the lower level technologies in 130 00:06:47,000 --> 00:06:50,520 Speaker 4: open source like vLLM. So this is the right place 131 00:06:50,560 --> 00:06:52,599 Speaker 4: to be when it comes to inference. But when you 132 00:06:52,600 --> 00:06:55,440 Speaker 4: think broadly about what's happened in AI, we have many 133 00:06:55,440 --> 00:06:59,000 Speaker 4: companies working with us on agents. Last week we announced 134 00:06:59,040 --> 00:07:02,719 Speaker 4: S&P Global is now running on watsonx Orchestrate, as an example. 135 00:07:02,960 --> 00:07:04,599 Speaker 3: So we're always open to new partnerships.
136 00:07:06,040 --> 00:07:09,400 Speaker 5: And let's just talk about, Jonathan, the go-to-market 137 00:07:09,400 --> 00:07:13,080 Speaker 5: strategy here of teaming with the age-old juggernaut that 138 00:07:13,280 --> 00:07:18,760 Speaker 5: is IBM, that has so many deep relationships across global enterprises. 139 00:07:19,320 --> 00:07:21,559 Speaker 5: But is that how you're going to work this going forward? 140 00:07:21,640 --> 00:07:24,840 Speaker 5: Is it teaming up with companies that have those legacy relationships, 141 00:07:24,920 --> 00:07:26,280 Speaker 5: or do you still go out there and win the 142 00:07:26,320 --> 00:07:27,040 Speaker 5: business yourself? 143 00:07:28,160 --> 00:07:29,680 Speaker 1: So I would say this is a peanut butter and 144 00:07:29,760 --> 00:07:33,360 Speaker 1: jelly sort of relationship, in the sense that oftentimes when 145 00:07:33,400 --> 00:07:37,360 Speaker 1: we meet with C-level executives, those C-level executives 146 00:07:37,440 --> 00:07:41,119 Speaker 1: turn to their tech teams and ask them to evaluate Groq. 147 00:07:41,480 --> 00:07:44,680 Speaker 1: And I've been in meetings where the CTO did that 148 00:07:45,240 --> 00:07:48,000 Speaker 1: and the response from the person is, I already use Groq. 149 00:07:49,000 --> 00:07:51,560 Speaker 1: It's my default for everything. So we already have the 150 00:07:51,560 --> 00:07:54,480 Speaker 1: bottoms-up. We have two point three million developers already 151 00:07:54,480 --> 00:07:57,240 Speaker 1: building on us. For comparison, OpenAI has four million. 152 00:07:57,880 --> 00:08:01,760 Speaker 1: Now, going to those deep relationships from IBM, and the 153 00:08:01,800 --> 00:08:04,560 Speaker 1: fact that IBM is a trusted partner who's been delivering 154 00:08:04,600 --> 00:08:07,760 Speaker 1: for decades: you put those two together and that's an 155 00:08:07,800 --> 00:08:09,200 Speaker 1: amazing go-to-market motion.
156 00:08:11,360 --> 00:08:13,840 Speaker 5: Wow, it's been great having you both on to talk 157 00:08:13,880 --> 00:08:16,000 Speaker 5: about the go-to-market strategy. Jonathan Ross, CEO of Groq, 158 00:08:16,040 --> 00:08:18,880 Speaker 5: of course, and Rob Thomas, Senior Vice President of Software 159 00:08:18,960 --> 00:08:21,240 Speaker 5: over at IBM. We thank you both very much. 160 00:08:21,240 --> 00:08:21,640 Speaker 3: Indeed.