Thursday, July 31, 2025

Raiza Martin on Building AI Applications for Audio – O'Reilly

Generative AI in the Real World

Generative AI in the Real World: Raiza Martin on Building AI Applications for Audio




Audio is being added to AI everywhere: both in multimodal models that can understand and generate audio and in applications that use audio for input. Now that we can work with spoken language, what does that mean for the applications we can develop? How do we think about audio interfaces: how will people use them, and what will they want to do? Raiza Martin, who worked on Google's groundbreaking NotebookLM, joins Ben Lorica to discuss how she thinks about audio and what you can build with it.

About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone's agenda. In 2025, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

Check out other episodes of this podcast on the O'Reilly learning platform.

Timestamps

  • 0:00: Introduction to Raiza Martin, who cofounded Huxe and formerly led Google's NotebookLM team. What made you think this was the time to trade the comforts of big tech for a garage startup?
  • 1:01: It was a personal decision for all of us. It was a joy to take NotebookLM from an idea to something that resonated so broadly. We realized that AI was really blowing up. We didn't know what it would be like at a startup, but we wanted to try. Seven months down the road, we're having a great time.
  • 1:54: For the 1% who aren't familiar with NotebookLM, give a short description.
  • 2:06: It's basically contextualized intelligence, where you give NotebookLM the sources you care about and NotebookLM stays grounded to those sources. One of our most common use cases was that students would create notebooks and upload their class materials, and it became an expert that you could talk with.
  • 2:43: Here's a use case for homeowners: put all your user manuals in there.
  • 3:14: We have had a lot of people tell us that they use NotebookLM for Airbnbs. They put all the manuals and instructions in there, and users can talk to it.
  • 3:41: Why do people need a personal daily podcast?
  • 3:57: There are a lot of different ways that I think about building new products. On one hand, there are acute pain points. But Huxe comes from a different angle: What if we could try to build very delightful things? The inputs are a bit different. We tried to think about what the average person's daily life is like. You wake up, you check your phone, you travel to work; we thought about opportunities to make something more delightful. I think a lot about TikTok. When do I use it? When I'm standing in line. We landed on transit time or commute time. We wanted to do something novel and interesting with that space in time. So one of the first things was creating really personalized audio content. That was the provocation: What do people want to listen to? Even in this short time, we've learned a lot about the amount of opportunity.
  • 6:04: Huxe is mobile first, audio first, right? Why audio?
  • 6:45: Coming from our learnings from NotebookLM, you learn fundamentally different things when you change the modality of something. When I go on walks with ChatGPT, I just talk about my day. I noticed that was a very different interaction from when I type things out to ChatGPT. The flip side is less about interaction and more about consumption. Something about the audio format made the types of sources different as well. The sources we uploaded to NotebookLM were different because we wanted audio output. By focusing on audio, I think we'll learn different use cases than the chat use cases. Voice is still largely untapped.
  • 8:24: Even in text, people started exploring other form factors: long articles, bullet points. What kinds of things are available for voice?
  • 8:49: I think of two formats: one passive and one interactive. With passive formats, there are a lot of different things you can create for the user. The things you end up playing with are (1) what is the content about and (2) how flexible is the content? Is it short, long, malleable to user feedback? With interactive content, maybe I'm listening to audio, but I want to interact with it. Maybe I want to participate. Maybe I want my friends to join in. Both of those contexts are new. I think that's what will emerge in the next few years. I think we'll learn that the types of things we will use audio for are fundamentally different from the things we use chat for.
  • 10:19: What are some of the key lessons from smart speakers about what to avoid?
  • 10:25: I've owned so many of them. And I love them. My primary use for the smart speakers is still a timer. It's expensive and doesn't live up to the promise. I just don't think the technology was ready for what people really wanted to do. It's hard to think about how that could have worked without AI. Second, one of the most difficult things about audio is that there's no UI. A smart speaker is a physical device. There's nothing that tells you what to do. So the learning curve is steep. So now you have a user who doesn't know what they can use the thing for.
  • 12:20: Now it can do so much more. Even without a UI, the user can just try things. But there's a risk in that it still requires input from the user. How do we think about a system that's so supportive that you don't have to figure out how to make it work? That's the challenge from the smart speaker era.
  • 12:56: It's interesting that you point out the UI. With a chatbot you have to type something. With a smart speaker, people started getting creeped out by surveillance. So, will Huxe surveil me?
  • 13:18: I think there's something simple about it, which is the wake word. Because smart speakers are triggered by wake words, they're always on. If the user says something, it's probably picking it up, and it's probably logged somewhere. With Huxe, we want to be really careful about where we think consumer readiness is. You want to push a little bit but not too far. If you push too far, people get creeped out.
  • 14:32: For Huxe, you have to turn it on to use it. It's clunky in some ways, but we can push on that boundary and see if we can push for something that's more ambiently on. We're starting to see the emergence of more tools that are always on. There are tools like Granola and Cluely: They're always on your screen, transcribing your audio. I'm curious: Are we ready for technology like that? In real life, you can probably get the most utility from something that's always on. But whether consumers are ready is still TBD.
  • 15:25: So that you’re ingesting calendars, e mail, and different issues from the customers. What about privateness? What are the steps you’ve taken?
  • 15:48: We’re very privateness targeted. I believe that comes from constructing NotebookLM. We wished to verify we had been very respectful of person knowledge. We didn’t practice on any person knowledge; person knowledge stayed personal. We’re taking the identical strategy with Huxe. We use the information you share with Huxe to enhance your private expertise. There’s one thing fascinating in creating private advice fashions that don’t transcend your utilization of the app. It’s a bit tougher for us to construct one thing good, however it respects privateness, and that’s what it takes to get individuals to belief.
  • 17:08: Huxe might discover that I’ve a flight tomorrow and inform me that the flight is delayed. To take action, it has needed to contact an exterior service, which now is aware of about my flight.
  • 17:26: That’s a very good level. I take into consideration constructing Huxe like this: If I had been in your pocket, what would I do? If I noticed a calendar that mentioned “Ben has a flight,” I can test that flight with out leaking your private data. I can simply lookup the flight quantity. There are a whole lot of methods you are able to do one thing that gives utility however doesn’t leak knowledge to a different service. We’re attempting to know issues which might be way more motion oriented. We attempt to inform you about climate, about site visitors; these are issues we are able to do with out stepping on person privateness.
  • 18:38: The way you described the system, there's no social component. But you end up learning things about me. So there's the potential for building a more sophisticated filter bubble. How do you make sure that I'm ingesting things beyond my filter bubble?
  • 19:08: It comes down to what I believe a person should or shouldn't be consuming. That's always tricky. We've seen what these feeds can do to us. I don't know the right approach yet. There's something interesting about "How do I get enough user input so I can give them a better experience?" There's signal there. I try to think about a user's feed from the perspective of relevance and less from an editorial perspective. I think the relevance of information will be enough. We'll probably test this once we start surfacing more personalized information.
  • 20:42: The other thing that's really important is surfacing the right controls: I like this; here's why. I don't like this; why not? Where you inject friction into the system, where you think the system should push back: that takes a little time to figure out how to do right.
  • 21:01: What about the boundary between giving me content and providing companionship?
  • 21:09: How do we know the difference between an assistant and a companion? Fundamentally the capabilities are the same. I don't know if the question matters. The user will use it how the user intends to use it. That question matters most in the packaging and the marketing. I talk to people who talk about ChatGPT as their best friend. I talk to others who talk about it as an employee. On a capabilities level, they're probably the same thing. On a marketing level, they're different.
  • 22:22: For Huxe, the way I think about this is which set of use cases you prioritize. Beyond a simple conversation, the capabilities will probably start diverging.
  • 22:47: You're now part of a very small startup. I assume you're not building your own models; you're using external models. Walk us through privacy, given that you're using external models. As that model learns more about me, how much does that model retain over time? To be a good companion, you can't be clearing that cache every time I log off.
  • 23:21: That question relates to where we store data and how it's passed off. We go for models that don't train on the data we send them. The next layer is how we think about continuity. People expect ChatGPT to have knowledge of all the conversations you have.
  • 24:03: To support that you have to build a very strong context layer. But you don't have to imagine that all of that gets passed to the model. A lot of technical limitations prevent you from doing that anyway. That context is stored at the application layer. We store it, and we try to figure out the right things to pass to the model, passing as little as possible (see the second sketch after the timestamps).
  • 25:17: You're from Google. I know that you measure, measure, measure. What are some of the signals you measure?
  • 25:40: I think about metrics a bit differently in the early stages. Metrics at first are nonobvious. You'll get a lot of trial behavior at first. It's a little harder to understand the initial user experience from the raw metrics. There are some basic metrics that I care about, like the rate at which people are able to onboard. But as far as crossing the chasm (I think of product building as a series of chasms that never end), you look for people who really love it, who rave about it; you have to listen to them. And then the people who used the product and hated it. When you listen to them, you discover that they expected it to do something and it didn't. It let them down. You have to listen to these two groups, and then you can triangulate what the product looks like to the outside world. The thing I'm trying to figure out is less "Is it successful?" and more "Is the market ready for it? Is the market ready for something this weird?" In the AI world, the reality is that you're testing consumer readiness and need, and how they're evolving together. We did this with NotebookLM. When we showed it to students, there was zero time between when they saw it and when they understood it. That's the first chasm. Can you find people who understand what they think it is and feel strongly about it?
  • 28:45: Now that you simply’re exterior of Google, what would you need the muse mannequin builders to deal with? What elements of those fashions would you prefer to see improved?
  • 29:20: We share a lot suggestions with the mannequin suppliers—I can present suggestions to all of the labs, not simply Google, and that’s been enjoyable. The universe of issues proper now’s fairly well-known. We haven’t touched the area the place we’re pushing for brand new issues but. We at all times attempt to drive down latency. It’s a dialog—you possibly can interrupt. There’s some primary habits there that the fashions can get higher at. Issues like tool-calling, making it higher and parallelizing it with voice mannequin synthesis. Even simply the range of voices, languages, and accents; that sounds primary, however it’s really fairly laborious. These prime three issues are fairly well-known, however it is going to take us by the remainder of the 12 months.
  • 30:48: And narrowing the hole between the cloud mannequin and the on-device mannequin.
  • 30:52: That’s fascinating too. At the moment we’re making a whole lot of progress on the smaller on-device fashions, however while you consider supporting an LLM and a voice mannequin on prime of it, it really will get a bit bit furry, the place most individuals would simply return to industrial fashions.
  • 31:26: What's one prediction in the consumer AI space that you would make that most people would find surprising?
  • 31:37: A lot of people use AI for companionship, and not in the ways that we imagine. For almost everyone I talk to, the utility is very personal. There are a lot of work use cases. But the growing side of AI is personal. There's a lot more room for discovery. For example, I use ChatGPT as my running coach. It ingests all of my running data and creates running plans for me. Where would I slot that? It's not productivity, but it's not my best friend; it's just my running coach. More and more people are doing these tricky personal things that are closer to companionship than business use cases.
  • 33:02: You were supposed to say Gemini!
  • 33:04: I love all the models. I have a use case for all of them. But we all use all the models. I don't know anyone who only uses one.
  • 33:22: What you're saying about the nonwork use cases is so true. I come across so many people who treat chatbots as their friends.
  • 33:36: I do it all the time now. Once you start doing it, it's a lot stickier than the work use cases. I took my dog to get groomed, and they wanted me to upload his rabies vaccine. So I started thinking about how well he's protected. I opened up ChatGPT and spent eight minutes talking about rabies. People are becoming more curious, and now there's an immediate outlet for that curiosity. It's so much fun. There's so much opportunity for us to continue to explore that.
  • 34:48: Doesn't this indicate that these models will get sticky over time? If I talk to Gemini a lot, why would I switch to ChatGPT?
  • 35:04: I agree. We see that now. I like Claude. I like Gemini. But I really like the ChatGPT app. Because the app is a good experience, there's no reason for me to switch. I've talked to ChatGPT so much that there's no way for me to port my data. There's data lock-in.
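
On the flight example (17:26), here is a minimal sketch of the "just look up the flight number" idea. The endpoint, parameter names, and helper functions are hypothetical, not anything Huxe has described; the point is only that the calendar event itself stays on the device and only the flight number and date go out.

```python
import re
import requests

# Hypothetical status endpoint; no real service is named in the episode.
FLIGHT_STATUS_URL = "https://api.example-flight-status.com/v1/flights"

def extract_flight_number(event_text: str) -> str | None:
    """Pull an IATA-style flight number (e.g. 'UA 123') out of a calendar event,
    keeping the rest of the event text (names, emails, notes) local."""
    match = re.search(r"\b([A-Z]{2})\s?(\d{1,4})\b", event_text)
    return f"{match.group(1)}{match.group(2)}" if match else None

def check_flight_status(event_text: str, date: str) -> dict | None:
    """Send only the flight number and date to the external service,
    never the full calendar entry."""
    flight = extract_flight_number(event_text)
    if flight is None:
        return None
    resp = requests.get(FLIGHT_STATUS_URL, params={"flight": flight, "date": date}, timeout=10)
    resp.raise_for_status()
    return resp.json()
```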
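On the context layer (24:03), this is a sketch of the pattern described in the conversation: keep the full history at the application layer and pass only a small, relevant slice to the model on each turn. The class and the tag-based retrieval are illustrative assumptions, not Huxe's implementation; a production system would more likely use embeddings and a real datastore.

```python
from dataclasses import dataclass, field

@dataclass
class ContextStore:
    """Application-layer store of what the assistant knows about a user.
    The full history never leaves the app; only a small selection is sent to the model."""
    items: list[dict] = field(default_factory=list)  # each item: {"text": ..., "tags": [...]}

    def add(self, text: str, tags: list[str]) -> None:
        self.items.append({"text": text, "tags": tags})

    def select(self, query_tags: list[str], limit: int = 5) -> list[str]:
        """Naive tag-overlap retrieval: score each item by shared tags, keep the top few."""
        scored = [
            (len(set(item["tags"]) & set(query_tags)), item["text"])
            for item in self.items
        ]
        scored = [entry for entry in scored if entry[0] > 0]
        scored.sort(reverse=True)
        return [text for _, text in scored[:limit]]

def build_prompt(store: ContextStore, user_message: str, tags: list[str]) -> str:
    """Pass as little as possible: only the handful of items relevant to this turn."""
    context = "\n".join(store.select(tags))
    return f"Context:\n{context}\n\nUser: {user_message}"
```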
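On parallelizing tool calling with voice synthesis (29:20), here is one way an application can hide tool-call latency: synthesize the part of the spoken reply that doesn't depend on the tool result while the tool call is still running. Both async helpers are stand-ins; no specific tool API or TTS service is named in the episode.

```python
import asyncio

async def call_tool(name: str, args: dict) -> dict:
    """Stand-in for an external tool call (weather, traffic, flight status, ...)."""
    await asyncio.sleep(0.3)  # simulate network latency
    return {"tool": name, "args": args, "result": "..."}

async def synthesize_speech(text: str) -> bytes:
    """Stand-in for a TTS request for the part of the reply that needs no tool output."""
    await asyncio.sleep(0.3)
    return b"audio-bytes"

async def respond(user_request: str) -> tuple[bytes, dict]:
    """Run the intro synthesis and the tool call concurrently, so the user
    hears something while the tool result is still in flight."""
    intro = "Sure, let me check that for you."
    audio, tool_result = await asyncio.gather(
        synthesize_speech(intro),
        call_tool("weather", {"city": "San Francisco"}),
    )
    return audio, tool_result

if __name__ == "__main__":
    asyncio.run(respond("What's the weather like?"))
```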
