Can AI “get a read” on you?
Sam: Welcome to the Deepgram Voice of the Future podcast, aka Our Favorite Nerds. At Deepgram, we’re obsessed with voice, and this podcast is our exploration of all the exciting emerging things happening in the world of voice technology. I’m your host today, Sam Zegas, VP of Operations at Deepgram, and our guest is Scott Sandland, CEO of Cyrano.ai. Scott, thanks for joining us.
Scott: Thanks for having me.
Sam: Great. So to get us started, why don’t you tell us a little bit about yourself? And in particular, what kind of nerd are you?
Scott: What kind of nerd am I? I think I’m two kinds of nerd at once, which I’m very proud of. My background is, I’m a hypnotherapist. So that’s one kind of nerd I am. I think people who get into hypnosis and hypnotherapy tend to be one type of nerd, and I am definitely that type. That’s my background: language, influence, subliminal messaging, systems, all that. And then the other side of my nerd is, you know, a kind of tech tinkerer. I wouldn’t consider myself a technical founder, but I can hold my own at the whiteboard.
Sam: That’s great. Two very distinct kinds of nerd. We love that. And now you’re at Cyrano. For those of you who wanna check them out online while we’re having this chat, it’s spelled c-y-r-a-n-o dot a-i. So, Scott, tell us about the company and what you guys do.
Scott: Sure. So what we do is strategic linguistic analysis. A lot of companies are aware of sentiment analysis and the idea of this basically binary thumbs-up, thumbs-down of how it went. We look at conversations much more strategically. We look for deeper meanings and patterns and tells in the communication, and then turn that into strategy suggestions for the humans in the conversation.
Sam: Got it. I can definitely relate to that. I’m a linguistics nerd myself, so this is near and dear to my heart. Tell me a little bit more about what pain you’re trying to address with Cyrano, and how does the solution actually solve it?
Scott: Sure. The short answer is soft skills: giving soft skills and strategy to people and systems that don’t currently have them. You know, chatbots are notorious for not having soft skills. But really, the origin story of this was, I was the CEO of a mental health clinic and also had a private practice. And I watched suicide become the second leading cause of death for people under the age of twenty-four in America. And it’s these crazy statistics where, for people in their teens, twenty to twenty-two percent of deaths come from suicide. So when you look at these numbers, you’re like, wait a minute, twenty percent of the time when someone dies, it’s because they killed themselves on purpose. And that doesn’t include the people who attempted suicide, where three thousand high school students attempt suicide every day. I mean, you see kids who get overmedicated or fall off track or run into problems, and my career was working with those people and helping them pick up the pieces. And it was a great job, and my team and I enjoyed it a lot. There’s a lot of satisfaction. There’s a lot of burnout in that too. It’s hard. And I just wanted to build a system that could interact with those kids and get in front of it and be on the prevention side. So I decided I needed to build a system that had empathy and strategy in its conversation, and that’s where Cyrano came from.
Sam: That’s an amazing founding story. How do you think companies have been trying to tackle this problem in the past? I mean, are you starting from a point where you basically think machines are completely unable to participate on a level of empathy? Or, like, where are we at today?
Scott: So when we were building it and I was talking to all the engineers, they really wanted if-then statements. And I understand why, but language doesn’t have that, especially English. English is a notoriously exception-based language. And there’s all this nuance that goes into it, not just sarcasm, but the nuance in the subtleties of the tells. Language analysis is so hard that a lot of the standard NLP, you know, BERT or Hugging Face or any of these great established things, looked at entities, intents, and sentiment. Those are really the three buckets they were putting things in. So you’ve got an entity, okay, that’s a noun. And an intent is functionally a verb. It’s a little bit more complicated than that, and sometimes the verb can be an entity, but whatever. And then there’s the sentiment, which is, did they like it or not? That’s how many stars that conversation got on Amazon. And so those are the things that got measured, but there’s a lot that doesn’t get measured in there. A lot of times those systems will just take those words, call them junk words, and throw them out. And sometimes it’s seventy percent of the sentence that just gets thrown out, because they didn’t know which one of these three buckets to put the words in. So we decided to look at a lot of those words and say, that’s where the communication is happening. That’s where the tells are, because people are paying attention to their nouns and verbs, but the little three- and four-letter words in the sentences provide a lot of insight into what’s going on.
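The bucket-and-discard behavior Scott describes can be sketched in a few lines of Python. The stopword list and the sentence here are invented for illustration, not taken from Cyrano or any particular NLP library:

```python
# Illustrative only: how a conventional entity/intent pipeline discards
# "junk" words before analysis. The stopword list is a toy example.
STOPWORDS = {
    "i", "would", "just", "really", "not", "sure", "the",
    "to", "a", "of", "and", "that", "it", "was",
}

def strip_stopwords(sentence: str) -> list[str]:
    """Keep only the words a naive pipeline would retain as 'content'."""
    return [w for w in sentence.lower().split() if w not in STOPWORDS]

sentence = "I would just really not be sure about the car to be honest"
kept = strip_stopwords(sentence)
dropped_ratio = 1 - len(kept) / len(sentence.split())
print(kept)           # the few surviving "content" words
print(dropped_ratio)  # well over half the sentence was thrown away
```

The hedging words that get thrown out ("would", "just", "really", "not sure") are exactly the tells Scott is saying carry the signal.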
Sam: That’s fascinating. You know, language is such a nuanced thing that humans are so good at picking apart all of its little complexities, and yet even humans misunderstand each other. So trying to teach machines how to do something in that really complex problem space is a pretty cool project.
Scott: Yeah. I mean, my wife and I love each other very much. We have a great marriage. But last night, we had a disagreement on a sentence. And, you know, it’s one of those things where you take for granted how great we are at parsing and inferring and all these things that have to be done just to do what you and I just did five times. And when a little piece of it breaks in an established relationship, you can fix it. But when that thing breaks in a chat system or something like that, everything goes sideways real fast.
Sam: Yeah. Makes perfect sense. So who’s your ideal customer today, then? And why is that the case?
Scott: Honestly, our ideal customer is Deepgram. And I really mean that, which is why I’ve loved working with you guys. It’s been about a year, a little bit less, that we’ve been doing stuff with you. But the way we work is, we’re an API-first company. So we’re an API that can plug in wherever important conversations are being handled. Whether that means we’re partnering with the end user who’s actually holding those conversations, or we’re working with platforms that empower those conversations, we have an API that just fits in that conversational layer, analyzes the conversation, and produces bullet-point advice for the frontline people having the actual conversation.
Sam: That’s really cool, and we’ll get a little bit more into the actual product in just a few minutes. But before we go there, you know, the problem that you’re working on creates a really interesting set of work for you, and I wanna dig deeper into why that’s so challenging. In the last year, Deepgram itself has done some cool work with sentiment analysis, but I know firsthand that sentiment is really difficult to measure. And not just sentiment, but emotion and some of these other things that you’ve talked about. It’s not just difficult for machines to measure. It’s actually difficult for humans to measure in a way that’s consistent from person to person.
Scott: I mean, you can focus-group a conversation, where you take a transcript or a recording or whatever it is and then focus-group with thirty people how it went. And there will be a bunch of different opinions. And then one of them, or a group of them, is now going to codify that and weigh scores somehow and then use that as a training model. So it’s super hard. And if you’re just looking at sentiment, there are diminishing returns on increasing accuracy. It really starts to plateau, and there’s an eighty-twenty, good-enough kind of thing that happens in the engineering rooms, which is totally reasonable when they’re looking at ROI. Which is why we care less about sentiment and more about where a person is in a decision-making process. We can say, this person’s open and curious and figuring things out here and needs education, or this person is closed-minded here, or this person is being completely irrational here, which leads to impulsivity and potentially regret. Seeing those different mental states is actually more useful when you’re thinking about the "so what" of it. And this was the thing we realized pretty early on: you need to be answering for "so what?" Because if you just give someone a graph that says, look, their F1 is this, their p2 is that, this wonderful data science project with, you know, a sine wave, they go, great. And then they go, so what? That’s where we need to start and work backwards from. So for us, we said the "so what" is: so what should I do about it? And we built our system to say, what kind of advice does a great salesperson or a great therapist give to an intern, and then figure out what they’re doing to make that happen. So it’s very much an expert system.
Sam: That’s really fascinating. And I can see how your experience in therapy is informing the way that you design this product. Because I think a lot of people who have tried sentiment- and emotion-detection type products so far have probably run into the fact that they’re limited. They don’t necessarily give you something that’s clearly actionable. It sounds like what you’re saying is that to make sentiment and emotion detection more usable in the next few years, we really need to shift into this "so what," the action that comes out of those insights.
Scott: Yeah, totally. And you’re right. Like, I spent nineteen, twenty years having important, uncomfortable conversations with somebody who didn’t wanna be in the room with me. And that means I had, I don’t know, ten thousand first sessions. And in your first session, you’re building trust, you’re building a relationship, you’re creating a safe environment. You’re doing all these things and coming up with KPIs. You don’t call them KPIs because it’s therapy, but you’re doing all that. And so I got really good at it. And that doesn’t make me special. You know, all my colleagues get good at that. I’m just one of those people who did the ten-thousand-hours thing. And I would look at a conversation as, okay, what equity am I building? And what equity am I about to have to spend to buy the next thing? Sometimes I’m building attention and spending attention; sometimes I’m getting them to spend trust so that we can get somewhere. And I’m looking at all the different currencies in a conversation and figuring out the exchange rates between them. So deeply built into Cyrano is this idea that there are multiple currencies in a conversation being exchanged, and you’ve gotta keep track of those.
Sam: That mental framework is fascinating, the idea that there’s an economy of different sorts of exchanges happening that needs to be, or can be, actively managed, or at least measured. I know that you guys measure a couple of different axes, like values and commitments and communication. Can you tell me a little bit more about that?
Scott: Sure. So our core taxonomy is really three taxonomies. One is their values, what they’re prioritizing in real time, which is easy to confuse with personality profiling. But the idea of a personality profile is more like an identity statement that’s permanent: you are an extrovert. Which I fundamentally disagree with. I think people are much more dynamic than that, and sort of plastic in how they fit into the world. So we measure what matters to them right now. Do they care more about a relationship? Do they care more about law and order and rule-following? Do they care more about their own ego? What matters to them right now, in the context of this conversation and decision-making process? We also measure where they are in that decision-making process. We got some stuff from motivational interviewing, and we really played with it and made it our own, enough that we could create our own language model around it. But we look at the process of going from desire to commitment. And that is a multistage process that every person goes through all the time. Sometimes it’s very easy, because the stakes are low and it’s, do you want a piece of gum? Yeah, I want a piece of gum. Therefore, I’m gonna have one. But when it’s a mortgage, you know, each one of those stages is more well defined. So we measure each of those, and we measure the priorities. And then we also measure the person’s learning and communication style, whether a person is more visually oriented or more auditory, as an example. Like, I’m very auditory. I’m good with words, and I can get a sense of people’s tonality and things like that. I’m terrible visually, and anyone who’s seen me make a PowerPoint or a UX mock-up knows I’m in last place there. That means if you wanna convince me of things, we can do this over the phone. We can have a conversation. A graph does nothing for me.
But a sentence does. So we measure those. We also have a bunch of other ones that are more industry-specific for different deployments, but those represent seventeen dimensions. And those dimensions on different axes allow us to create a really interesting triangulation of where a person is in this n-dimensional cloud. And that’s how we can turn it into a strategy.
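One minimal way to picture "triangulating" a speaker in a dimensional cloud is to score them on a few axes and match against archetype profiles. Everything below is invented for illustration: the dimension names, the archetype vectors, and the strategy labels are not Cyrano's actual taxonomy, which has seventeen dimensions across three taxonomies.

```python
# Sketch: locate a speaker in a small dimensional cloud and pick the
# nearest advice strategy by cosine similarity. All values are toy data.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# A few of the axes Scott mentions: current values, decision stage, channel.
DIMENSIONS = ["relationship", "rule_following", "ego", "commitment", "auditory"]

# Hypothetical archetype profiles mapped to advice strategies.
ARCHETYPES = {
    "educate":     [0.8, 0.3, 0.2, 0.2, 0.6],  # open, curious, early stage
    "close":       [0.4, 0.5, 0.3, 0.9, 0.5],  # near commitment: ask for it
    "de-escalate": [0.1, 0.2, 0.9, 0.1, 0.4],  # ego-driven, irrational
}

def nearest_strategy(profile):
    """Return the archetype whose profile is most similar to the speaker's."""
    return max(ARCHETYPES, key=lambda name: cosine(profile, ARCHETYPES[name]))

speaker = [0.7, 0.4, 0.1, 0.3, 0.7]  # scored in real time from the transcript
print(nearest_strategy(speaker))
```

The point of the sketch is only the shape of the idea: once a person is a point in the cloud, "what should I do about it" becomes a nearest-neighbor question over strategies.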
Sam: That’s so interesting. So you’re analyzing the speech that people produce in a conversation and measuring all these different aspects of it to try to build a picture of how this person could be influenced, potentially, or, you know, where they might be ready to move from one belief to another. I think the bottom line I’m getting here is that AI is really getting good enough to, quote unquote, get a read on us. Do you think that some people are worried by that development?
Scott: They should be, and you’re right. Cyrano can get a read on you, no question. And we’re not the only ones. I mean, specifically, our patent is how likely a person is to do the thing they’re talking about, and then how to respond to them to increase or decrease that likelihood of follow-through. And so that’s influence. The idea that machines are becoming influential and optimizing for poorly defined outcomes is a problem. And I think if people want to be worried about this, which I understand why they would be, the thing to be worried about is poorly defined outcomes in the training, and making sure that we’re not letting all the toothpaste out of the tube before we really understand what we’re doing. So for us internally, we spend a lot of time paying attention to unforeseen consequences and not moving fast and breaking things.
Sam: Yeah. You know, we live in an age where a lot of people feel manipulated by algorithms. There’s a lot of misinformation on social media in particular. I could see some dystopian future in which this sort of technology proliferates and becomes part of that problem. But, you know, there’s a huge ecosystem out there. Sometimes it’s hard to control the way that certain technologies are gonna be used once they reach maturity.
Scott: Yeah. And, you know, there’s an aspect where a handful of the things that my team and I have built are a Black Mirror episode. Like, it’s really scary. It’s not just the Replika or the Affectiva, which are cool, where they’re like, okay, we can mimic a conversation, or, oh, we can look at your pupil dilation. Using Replika as an example, those chatbots are just trying to hit the ball back. Right? And that’s what chat systems do these days: they just hit the ball back, and they usually use question marks to control the conversation and stay on the rails. But once you graduate from that to what our system is all about, which is not just, how do we hit the ball back, but how do we win this point? How do we win this chess game? And what does winning mean? How do we define that? As soon as you start getting into all of that, it’s a completely different situation. And then you take the bad actor’s approach to it, where you say, instead of just posting bad propaganda, what if it could actively engage in a manipulative conversation with your, you know, crazy aunt? That’s really dangerous. So figuring out the ethics of this, and the frameworks for this, is something that we move slowly on.
Sam: Yeah. It’s fascinating. You know, with the great power of new technologies comes great responsibility, and I’m glad to hear you guys have an eye toward some of those risks. But on the other hand, as speech technology improves, companies and organizations in every sector are gonna have to think about how these developments affect their product and the competitive landscape they’re in. So I’m wondering, for companies that don’t start adopting new voice technologies, how do you expect they’ll be left behind?
Scott: I think the way to approach this question is to think about what computing looks like in five or ten years, because there is an incubation and a development process to this. Five to ten years from now, we’re doing much more ambient computing. I think we’re at the very fuzzy front end of Web 3.0 and the metaverse, and whether Meta, slash Facebook, makes it or it’s something independent of that, we are going to be moving to a much more AR/VR hybrid workspace. And all of that means keyboards become less useful and less convenient. Keyboards are great in cubicles, but less so in VR. So that means having systems that can get as close as possible to real-time transcription, having that transcription be accurate, obviously, and then putting it into systems, not just for summary and action items, but for truly understanding each other. What if HR could say, hey, look, Sam is being managed by this person, and Sam’s great, but he’s not doing the best he can because this person’s the wrong boss for Sam. We need to either intervene and talk to Sam’s manager to help manage Sam better, or move Sam so that we’re getting the most out of his potential. And that could be a passive system that’s not, like, recording the words and giving them to the boss, but just looking at the graphs, looking at the scores, looking at the analyses, and saying, we can do better for Sam here. And you could do that with teachers. You could do that with counselors. You can do that with doctors. You can do that with physical therapists for rehab. You can do it with bosses and sales teams. So you actually have this dynamic ability to optimize at the conversational level. And companies that aren’t paying attention to that are gonna miss.
I look at it like sports: matchups make things interesting, and that’s why we play the games. And, oh, I don’t know, fifty years ago, people didn’t understand what stats mattered. Now you look at Moneyball and advanced analytics. We’re getting into advanced analytics for conversation.
Sam: Yeah. I couldn’t agree with you more. I think my attitude is that the speech dataset that exists all around us, the one we’re using and building right now in this conversation, is the largest untapped data source out there in the world. You know, humans love to communicate by language. It’s so natural and easy for us. And yet there are so many barriers between the way that we use language and the way that machines are able to gain insights from it. And I think as we break that barrier down, we’re gonna see just a huge amount of growth and opportunity in terms of what’s possible, what we can measure, what we can build products around, and what we can build experiences around. That’s part of why I think speech technology is such an exciting space right now.
Scott: Yeah. I mean, you and I are similar nerds in that we appreciate humans as social apes, with language being our defining characteristic. And we are a words-first, or story-first, culture. And machines are a math-first entity, you know, existence, whatever that is. I’ll call it a species for the sake of this right now. So far, humans have had to learn robot language. We’ve had to learn Python or Java or C or whatever, so that we could interface with the computers. And now the inverse is starting to happen, where the computers are starting to learn our language. And once they get eighty percent as good at it as we are, they will get better than we are at it pretty quickly. That’s just sort of how machine learning works. And we’re going to get to a spot where computers understand us better than we understand them. And then we’re gonna get to a spot where computers understand us better than we understand ourselves. It’s a really exciting time to be doing this kind of work, and it goes back to the importance of safeguards.
Sam: It absolutely does. Yeah. There’s so much exciting opportunity there, and a lot of risk. And one of the trends I think you’re highlighting is the personalization of speech technology. The speech tech that most people are familiar with today, like Siri or Alexa, is a one-size-fits-all kind of model. And if you don’t speak in a standard way, or in a way that someone explicitly built a general model for, it’s not gonna understand you right. And I think that as we get deeper into machine learning for speech technology, we’re gonna see a whole lot of really interesting personalized understanding and analysis that is really gonna push machines into that area of expertise.
Scott: Yeah. And I think people who know each other really well can say more with a sentence than people who don’t know each other can with a paragraph. You know, like my best friend: I can give him a look, and he knows everything I mean. My wife can say two words to me, and that’s more than you saying five sentences to me, because there’s an internal understanding of each other. And what if you had a Siri that had that level of understanding of you, and had clearly defined outcomes for optimizing for your best scenarios and a better you, however we want to define that? So if you had a truly personal computer that understood your personality, that understood you at a personal level. You know, the big-deal executives have an amazing executive assistant who understands that about them. And when they say, put this on my calendar, the assistant can gauge, you know, is this going in in pencil or pen? They know what’s flexible and what isn’t, because of tells, because of little things they know how to pick up on, because that’s their job. Why can’t Alexa do that? Why can’t Google Assistant do that yet? And that’s where we’re headed. And once we have that, it takes us from, you know, DOS to Windows, where we can actually customize the interface so it works the way we want it to.
Sam: Yeah. That’s fascinating. Really cool stuff. Switching to a more technical gear for a minute, tell me a little bit about where AI fits into the solution that you built.
Scott: Sure. We built our system so that the deployment is computationally lightweight, you know, really easy, really light, just words. But the training is really where the AI comes in. We actually built out our own AI stack from scratch, where we measure things at the word, utterance, and conversational level, and we’re always tracking all three of those in our training. So you can look at a single word, which is what a lot of NLP will do, and then also look at the utterance, which some other systems do, but often one system doesn’t do both. And then we added a conversational layer. So as the conversation progresses, we looked at, you know, word one, phrase one. And then by the time you get to, like, utterance fourteen, the individual sentence is becoming less important and the overall pattern in the conversation becomes more interesting. So we built an AI that looks at each sentence as sort of a game within a tournament. We looked at it really like poker, where you have individual betting rounds, and then individual hands, and then the tournament overall. There was an AI system that a Carnegie Mellon student made called Libratus, and it was winning poker tournaments. And we really looked at how he, and I’m blanking on the name of the student, looked at the game within the game in the training for his thesis project. And we did the same thing with our AI when training our models.
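The word/utterance/conversation layering can be sketched as a running score that shifts weight from the current utterance toward the accumulated pattern as turns pile up. The scoring functions and weighting scheme here are invented stand-ins, not Cyrano's training setup:

```python
# Sketch: score a conversation at three levels, letting the accumulated
# conversation-level pattern dominate later turns. Toy scoring only.

def word_scores(utterance: str) -> list[float]:
    # Stand-in word-level signal: score by word length.
    # A real system would score tells, hedges, function words, etc.
    return [len(w) / 10 for w in utterance.split()]

def utterance_score(utterance: str) -> float:
    scores = word_scores(utterance)
    return sum(scores) / len(scores)

def conversation_score(utterances: list[str]) -> float:
    running = 0.0
    for turn_index, utt in enumerate(utterances, start=1):
        # Turn 1: the utterance is everything. By turn 14, the running
        # pattern dominates and a single sentence matters much less.
        w_utt = 1.0 / turn_index
        running = (1 - w_utt) * running + w_utt * utterance_score(utt)
    return running

convo = ["hello there", "I am not sure about this", "maybe we could try"]
print(round(conversation_score(convo), 3))
```

The 1/n weighting is just one simple way to express "game within the tournament": each new hand updates, but never overturns, the tournament-level read.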
Sam: That’s a really interesting approach, thinking about how a game-like or tournament-like structure could help build an ever-expanding picture of what’s happening over the course of a conversation. I’m curious, what do the datasets look like for that kind of training? Like, how do you tag motivation?
Scott: So there were a couple of things that we did. My cofounder is a different kind of nerd than either of us. I mean, he is a linguist, so you guys have that in common. But he is a Star Trek nerd. So the first thing we did was grab every Star Trek script, because they’re very archetypal characters. You know, Spock and Data are the same character. And so you can play this game where you put people into buckets and start figuring out motives and all that. And then you can do the same thing where you’re breaking down scenes, so you can break down the motives of characters in a scene. And I have a degree in film, actually, as it turns out. So I could break down motive in scene structure and script writing, and Dan, as a sociologist and linguist, and being the Star Trek geek, was looking at it on that side. So we started there, then went to more TV shows and movies, and then did retail automotive. We grabbed a good dataset from retail automotive, so now we have outcomes, and we can train on outcomes. And then the largest dataset we got was, we took our tool, put it on Zoom, made it free for about eighteen months, and got thirty thousand users using it and giving us feedback on the output our system was giving. So the first two were sort of internal training, and then we deployed the advice coming out of our system to thirty thousand people and got refinements on the accuracy and usefulness of that advice from those thirty thousand users.
Sam: It’s a really cool approach. I also am a bit of a Star Trek nerd, but more on the Voyager side, and the archetypal character that relates to Spock and Data is Seven of Nine, who’s a fantastic character, really.
Scott: Dan knows exactly what you mean. I have no idea what that is. Seven of Nine means just genuinely nothing to me.
Sam: That’s great. Different nerds. Love that Star Trek was part of the story there. It’s cool. So this actually speaks to how Cyrano and Deepgram fit together in the same ecosystem, because you, of course, run on transcripts of conversations that need to be as accurate as possible as your input, to be able to perform your analysis on things like commitment and communication style. Is that right?
Scott: Yeah. Completely. I mean, it’s lazy to say, but it is a garbage-in, garbage-out kind of situation. When we first got started, we wondered if we would do our own transcription. And it took us probably three hours to realize that we would not. It’s its own big endeavor, and you need to get it right. And we looked at some really bad transcription. There were two things that really stuck out. One was, we were looking at a transcript, and it said "microns, olives." And we’re like, what the hell is a "microns, olives"? And it was "Mike runs all of this," in a department’s decision-making process. So you look at that and you’re like, everything in that is useless, because at the transcription level, it wasn’t good enough. So, yeah, partnering with Deepgram is an important thing for us.
Sam: Yeah. That would be an accuracy of zero percent. You hate to see it.
Scott: Yeah. It’s inaccuracy. And then the other one I actually talked about with your CTO the other day. When we were looking at retail automotive, we found a word from the rural South, and it had two apostrophes in it. It was "ya’lld’ve," which is "you all would have" as one word. And all those words get measured in our system in different ways, so figuring out how to take "ya’lld’ve" and turn it into multiple vectors in, you know, like a word2vec thing, it was a flip-the-table kind of moment, where you’re like, you can’t be serious. You can’t have "ya’lld’ve." So, yeah, that’s why finding the right transcription partners is critical.
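One common way to handle a token like that is to expand dialect contractions into standard words before vectorizing, so the embedding model only ever sees vocabulary it was trained on. The contraction table below is a tiny invented example, not a real dialect lexicon or Cyrano's implementation:

```python
# Sketch: expand dialect contractions before embedding lookup, so
# "ya'lld've" maps to tokens a word2vec-style model has actually seen.
CONTRACTIONS = {
    "ya'lld've": ["you", "all", "would", "have"],
    "y'all": ["you", "all"],
    "would've": ["would", "have"],
}

def expand(tokens: list[str]) -> list[str]:
    """Lowercase tokens and replace known contractions with their expansions."""
    out: list[str] = []
    for tok in tokens:
        out.extend(CONTRACTIONS.get(tok.lower(), [tok.lower()]))
    return out

print(expand(["Ya'lld've", "loved", "that", "truck"]))
# each expanded word can now be looked up in an embedding table
```

The trade-off is that expansion discards the dialect signal itself, which, as the surrounding discussion notes, may be exactly what you wanted to keep for some use cases.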
Sam: Yeah. You’re really homing in on something that is near and dear to my heart at Deepgram, which is thinking about how we build models that are able to recognize different dialects, and making some strategic decisions about what kind of output we want to put out in the world. Is it something that’s standardized into a very formal, literary kind of English? Or are there circumstances when we want to be able to output something that reflects the idiosyncrasies of a certain dialect? There are use cases in which either of those two things could be valuable.
Scott: And it’s an important conversation, because you can skew it real easily, where you say, we’re optimizing for this deployment, but what that also means is, we’re biasing against this group, because we want that group to have the right experience, because they’re our target customer. So there are really interesting conversations that you guys have to have at a sociological level and an ethical level that, I think, get underrepresented, and people just don’t get it. It’s hard.
Sam: There’s a lot of conversation that happens there, and it’s an area that I’m really excited for us to tackle in the next couple of years. But to bring this full circle to something that you mentioned right up at the top: we’ve talked about the tech stack underlying it and how you’re measuring different aspects of the way a conversation is flowing. So bring us back around to, like, what does the actual user see as the output? What do you get if you’re a customer of Cyrano?
Scott: If you are a customer of Cyrano, you get bullet-point advice. You get sentences. You get contextual advice on, here’s how to resolve the problem with this person. Here’s how to mentor this person. Here’s how to collaborate with this person. Here’s how to show them an open house. Here’s the kind of collateral that will influence them in a follow-up. It’s very actionable. And that was the key for us: we wanted the lowest-paid employee to be able to take this, with just about no onboarding, or automated onboarding, and understand what to do. So it’s not, you know, graphs and charts that need to be interpreted. And it’s not, your talk time was this and their talk time was that. It’s a bullet point: you are the kind of person who is expressive and great at talking. Be careful not to suck up all the air in the room. You need to give them time to be quiet. So it’s really that actionable. And our system has over fifty thousand different pieces of advice that it can give as individual bullet points, as observations. So when you’re getting a report on a person, you’re getting a couple dozen bullet points, and that combination probably won’t ever happen again, just because of all the permutations.
And so you’re seeing this evolve: every time you get an email from the person, every time you have a Zoom conversation with them, it’s automatically ingesting that and including it in an aggregating, you know, fingerprint, which is the way we think of it, on that person. And so your fingerprint on each person is updating. So you can look at your calendar for the day, click on each person, and it’ll say: this is how to approach the conversation. So you’re actually getting the advice before the call, not after the call. You’re not getting some sentiment score where we’re like, hey, they were pissed. And you’re like, why? And they’re like, oh, we don’t know, we’re sentiment analysis. Right? Instead you have a thing that says: going into the call, here are the three do-this-not-thats you need to pay attention to. So you can, like, study and cram and prepare for the call at a person-to-person level rather than a subject-matter level. And I think, going back to poker, that’s playing the person, not playing the cards. And that’s what the people who are best at this know how to do.
Sam: Yeah. I gotta say, you guys have an awesome product. It’s really interesting, and there’s such a big scope to what you have already done. And I know that you’ve got a lot of cool plans coming up in the future, too. So congrats to you. Is there anything big coming up at Cyrano that you wanna make people aware of?
Scott: The things that I’m most excited about, I can’t talk about yet. But that’s — Right. — you know, always the case. We’ll be with you guys at Project Voice coming up. I don’t know when this airs versus when we’ll be at Project Voice, but that’s always fun. It’s a bunch of voice tech people that I’m excited about. And we’ve got a couple special deployments where we’re really customizing our output for a specific customer, and when that gets out there, it’s gonna be a really fun thing to show off.
Sam: Awesome. Well, we can’t wait to see you there. And I’m excited for all the headlines that I’m sure we’ll see about you guys in the near future.
Sam: Yeah. Before we go here, I wanna take a minute to remind everyone of how far we’ve come with technology, even just in our lifetime. So I’m gonna ask you to explain a piece of outdated technology like you would to a kid who was born after twenty ten. So this person is about ten years old. Okay? They’ve spent their whole life on smartphones. And I want you to explain to them how a pager used to work and how people used to use pagers.
Scott: Okay. So my son’s six, so I can do this. Okay. So a long time ago, we didn’t have cell phones. And so if you wanted to call a person, you had to call their house, and if you called their house and they weren’t there, you didn’t know what to do next. So they invented a thing that you could carry in your pocket. And if I wanted to talk to you and you weren’t home, I called that thing and it would beep. And you would look at it and it would show my phone number, and you had to know it was my phone number and remember it, and then you had to go find another phone to call me so we could talk. And if you were outside, you would have to use a pay phone. And what a pay phone is, is a thing that was tied to a wall where you could make phone calls from, and it worked on quarters. So where your cell phone is now, humans carried in their pocket a beeping thing that would tell them who was calling, and a pocketful of quarters so they could use the wall phone to call me back to find out what I wanted, because there was no such thing as text messaging. And that was a big improvement that we were all really excited about.
Sam: I remember being really excited about that as I paged my mom six times in the afternoon, and I’m sure she was like, what is going on?
Sam: Well, ten-year-olds, you heard it here first. Scott, thanks so much for being with us on Our Favorite Nerds. Great talking to you.
Scott: Love it.
Sam: So do I. To all our listeners out there, thanks for tuning in. Come check us out for more info, either on Deepgram or on Cyrano. Cyrano is at cyrano.ai. And, of course, you can find Deepgram at deepgram.com or at Deepgram AI on all of our socials. With that, we’re out. Catch you next time.