AI Minds The Podcast

AI Minds #068 | Samuel Pearton, Chief Marketing Officer at Polyhedra

In this episode, Samuel Pearton shares how Polyhedra uses ZKML to build verifiable AI and protect models from tampering.

Samuel Pearton is Chief Marketing Officer at Polyhedra. Polyhedra Network is building foundational infrastructure for trust and scalability in AI and blockchain systems to enable secure, verifiable, high-performance applications. Led by a world-class team of engineers, researchers, and business leaders from institutions such as UC Berkeley, Stanford, and Tsinghua University, Polyhedra's deep expertise in zero-knowledge proofs and distributed systems underpins the development of its technical solutions.

Samuel Pearton is the Chief Marketing Officer at Polyhedra, driving the future of intelligence through its pioneering, high-performance technology. Drawing on decades of experience in tech, global marketing, and cross-cultural social commerce, Samuel understands that trust, scalability, and verifiability are essential to AI and blockchain.

Before officially joining Polyhedra’s executive team in October 2024, he played a key advisory role as the company secured $20 million in strategic funding at a $1 billion valuation. Prior to Polyhedra, Samuel founded PressPlay, a social commerce and engagement platform that connected athletes and celebrities—including Stephen Curry—with China’s fan economy.

Listen to the episode on Spotify, Apple Podcasts, Podcast Addict, or Castbox. You can also watch this episode on YouTube.

In this episode of the AI Minds Podcast, Samuel Pearton, CMO of Polyhedra, shares his journey from musician to building the future of verifiable AI.

Samuel recounts launching a startup in China with celebrity partners like Stephen Curry and Cristiano Ronaldo, creating unique digital fan experiences.

He then dives into Polyhedra's groundbreaking work with zkML—zero-knowledge machine learning—a cryptographic method to ensure AI models operate without tampering.

The conversation explores how verifiable AI can safeguard high-risk applications like finance, robotics, and autonomous agents from fraud and model drift.

Samuel and Demetrios unpack the urgent need for trust layers in AI, the future of agent marketplaces, and how Polyhedra aims to set the standard for AI integrity.

Listeners will gain insight into a fast-approaching future where cryptographic guardrails aren't optional—they're essential.

Show Notes:

00:00 From Unemployed Musician to Startup Founder

03:32 "Co-Founding Polyhedra with Berkeley Geniuses"

06:35 "Verifiable AI for Transparency"

13:29 AI Security and Fraud Prevention

15:59 "Verifiable AI: Complementary Consumer Protection"

More Quotes from Samuel:

Demetrios:

Welcome back to the AI Minds podcast. This is a podcast where we explore the companies of tomorrow, built AI-first. I'm your host, Demetrios, and this episode is, you guessed it, brought to you by Deepgram, the number one speech-to-text and text-to-speech API on the Internet today. We're trusted by the world's top conversational AI leaders, enterprises and startups alike, some of which you may have heard of, like Spotify, Twilio, NASA and Citibank. I'm joined by the CMO of Polyhedra, Samuel, today. How you doing, dude?

Samuel Pearton:

I'm good man, thanks for having me, Demetrios.

Demetrios:

Well, you've got an incredible story, which I think starts in China, in Beijing. You moved there, and you decided you wanted to start your own thing. Can you give me a bit of the background on what happened?

Samuel Pearton:

I was a musician out of work in Australia, trying to do some different things within the music industry, and it didn't really work out for me. So I ended up in China and started seeing the opportunity, initially across entertainment, but I got more interested in how the Chinese consumer, but also the Chinese KOL or celebrity, was monetizing their IP through digital channels. This is back in 2015, and it's also commonplace today. But the Chinese celebrity really future-tripped with creating their own brands, live streaming, digital rewards, mobile gaming, and the rest of it. I got obsessed with it, really, and ended up starting my own startup called PressPlay, where our founding partner was Stephen Curry. We also worked with Stephen's sponsors, like Under Armour, JBL and multiple others, including his Chinese sponsors, to create paid content experiences for fans, money-can't-buy opportunities, and e-commerce experiences. So that was it, and we ended up working with a bunch of other celebrities, including Cristiano Ronaldo, Yana, Sanita Kupo, and a lot of other creators and influencers. So, really cool experience. Craziness.

Samuel Pearton:

Had really no idea what I was doing, but was backed by some really cool people that I'm still very close with today. And very fortunate about that.

Demetrios:

So you had your time in the celebrity circles, but then for your next act, it's almost like going back to China navigated you to the new company that you're at right now.

Samuel Pearton:

So over Covid, I was wanting to do something myself again, and it didn't play out, so I was introduced by another good friend of mine to a couple of students from Berkeley, Jiaheng, Tiancheng and Abner, who would later become the three co-founders of Polyhedra. And these guys are crazy smart. They rose through the ranks through quantum and mathematics competitions in China, did their undergrads at Jiao Tong, the Stanford of China, and ended up at Berkeley together. I was fascinated by that story, that they were so young, so accomplished in research, and I was fascinated by what drove them and their vision. It's very different from me: at PressPlay, the technology we built assisted the problem I was trying to solve, whereas these guys, through their research and hundreds of hours in the lab, are actually cooking up the infrastructure that everyone else builds on.

Samuel Pearton:

For me, that was the real fascination, that it'd be really fun. And it has been amazingly fun to take more of a backseat support role and run the go-to-market team, but be at the real coal face of innovation.

Demetrios:

So tell me a bit about what this infrastructure is.

Samuel Pearton:

So Polyhedra's major focus, verifiable AI, is built on a technology called zkML, which stands for zero-knowledge machine learning. Zero-knowledge machine learning is a cryptographic instrument that uses mathematical algorithms to prove that an AI model is operating correctly, without human tampering. So it's a layer that's added or embedded into large language models to protect both businesses and consumers.

Demetrios:

And is this something that you would add as you're building out the apps, or is it something that you would expect someone like a big research lab to add into the large language model?

Samuel Pearton:

You'd hope both. I think it's going to be a slower play convincing, especially, the closed giants such as OpenAI to actually want to be transparent around what they're cooking and what their models are doing. We've already noticed model tampering, with them testing better models than they're releasing to the public. And our technology can prove whether a model is acting correctly or not aligning with what it should be doing via the code. So even for the creative arts, our technology works as a kind of recipe detection, and through to other major areas such as finance, healthcare and robotics; we have certain product sets that match those. So, to answer your question: we think that verifiable AI in some form will be involved at the large language model layer, but we also think that builders will focus on and want to implement verifiable AI at the application layer.

Samuel Pearton:

And we're just not sure where the pressure comes from. Does it come from consumers, when a robot that's living in our house with us does something to one of our children? It's all pretty bleak stuff when I list examples of the dangers of this technology over the next 20 years without guardrails. But I think just being able to prove, especially in these high-risk scenarios, that the AI model is operating correctly is just so important.

Demetrios:

Now you mentioned there's no human tampering involved. Can you explain that a little more?

Samuel Pearton:

So it's all mathematics. Once our model's applied, it creates a proof that is almost like a digital fingerprint or a digital signature that the model is acting correctly. So it's all a trustless system, and you cannot tamper with our technology, because it's all done through mathematical cryptography. It generally lives on chain, but it can also live off chain, and it has a ledger system that basically protects the consumer. So it could be like a red light, green light scenario on a robot: if you want a challenge, if you see a robot doing something and you want to test the model, hit the Polyhedra button. These are all just future-tripping theories, but maybe the system runs and it tells you: no, this model is acting correctly, it's tamper-proof, it's ready to go. And that's all done by the mathematical algorithm that we've created, called zkML.
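The "digital fingerprint" idea Samuel describes can be sketched in plain Python. This is a simplified illustration for readers, not Polyhedra's actual zkML system: it hash-commits to a toy model's weights and binds each inference (model, input, output) into a transcript digest that anyone can re-check. Crucially, this naive version requires revealing the weights to verify, whereas real zero-knowledge machine learning proves the same "this exact model produced this output" statement without exposing the model.

```python
import hashlib
import json

def fingerprint(weights: list[float]) -> str:
    """Commit to the model's parameters with a cryptographic hash."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def prove_inference(weights: list[float], x: list[float]):
    """Run a toy linear model and bind weights, input, and output into one digest."""
    y = sum(w * v for w, v in zip(weights, x))  # stand-in for a real model forward pass
    transcript = json.dumps(
        {"model": fingerprint(weights), "input": x, "output": y}
    ).encode()
    return y, hashlib.sha256(transcript).hexdigest()

def verify(weights: list[float], x: list[float], y: float, proof: str) -> bool:
    """Recompute the transcript digest; any tampering with weights, input,
    or claimed output changes the hash and the check fails."""
    transcript = json.dumps(
        {"model": fingerprint(weights), "input": x, "output": y}
    ).encode()
    return hashlib.sha256(transcript).hexdigest() == proof

weights = [0.5, -1.0, 2.0]
y, proof = prove_inference(weights, [1.0, 2.0, 3.0])
print(verify(weights, [1.0, 2.0, 3.0], y, proof))        # honest run: True
print(verify([0.5, -1.0, 2.5], [1.0, 2.0, 3.0], y, proof))  # tampered weights: False
```

The hash commitment gives tamper evidence, which is the "red light, green light" part; the zero-knowledge machinery that zkML adds on top is what lets a verifier run this check without ever seeing `weights`.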

Demetrios:

And so you're protecting against humans trying to prompt-inject different models, or it's making sure that the model doesn't go off the rails inherently?

Samuel Pearton:

I think it's a blend of both, but the core function is making sure that the model is fundamentally acting like the model should. We're already seeing loads of cases of fraud, especially, across people tampering with different models, particularly in the open source space at this point. So it's a problem already. But as AI agents become such a bigger part of our lives, and we have digital twins doing a lot of things for us, being able to verify that the actions of those twins, or the actions of the large language model inside your robot, haven't been maliciously attacked by hackers or by people with bad intentions, or that the technology's just got a glitch and something's happened. It's just that kind of extra layer of guardrail. It's like a seatbelt in a car: you can still drive a car without it, but it's probably a smart idea to wear one.

Demetrios:

Especially in 2025. I think maybe if you had asked me that when I was growing up, I was adamantly against it. And now, having two daughters, I very much understand the value of a seatbelt, and I don't go anywhere without it.

Samuel Pearton:

Especially on the autobahn up there, man.

Demetrios:

They go pretty fast around here. And so I like this analogy of having a seatbelt and being able to put it in as an extra layer to make sure that if I've got an agent, and the agent is acting for me, I don't have moments where the agent is doing something that it shouldn't be doing.

Samuel Pearton:

Yeah.

Demetrios:

I guess the hard part in my mind is more that the agent inherently is going to do stuff sometimes that is wrong.

Samuel Pearton:

Yeah.

Demetrios:

And so the idea of guardrails is great, but that is also, if I'm understanding it correctly, a bit difficult to implement in practice when you're saying, all right, agent, go and buy me a plane ticket and it comes back with a hotel room instead or something like that. Is that the type of thing that you're seeing it combat against?

Samuel Pearton:

No, it's more around the more malicious activities. I think you may have seen Anthropic just partnered with Visa to issue AI agents credit cards, and Salesforce just opened up their digital workers section. It's more about when AI fundamentally takes over all these tasks, doing our groceries for us and making transactions. We've been talking to multiple big financial institutions, and this is a hot topic in the Fortune 500 and corporates: what is the solution for fraud protection and governance over AI agents? So our model wouldn't attack those kind of harmless, I guess, mistakes. Our model would be looking for really malicious activity, illegal activity, fundamentally. And, as my CTO explains, it is a head-scratcher for us, because a lot of the problems that we're trying to solve don't actually really exist yet. But with the rapid rate of how quickly AI agents in particular are moving, we feel that people are going to want to have that ability. And it's not like we're wanting to enforce our model across every single large language model, but one of the products we're working on is like the fingerprint: if you saw your robot acting crazy, or you wanted to put that extra layer of protection on the AI agent that controls your credit card, you can access this technology to do so. Or, alternatively, we are going to launch a verifiable AI agent marketplace. That's a long-term work in progress.

Samuel Pearton:

But you're going to be able to work with verifiable apps. We know that every AI agent or AI outcome on there is being verified by our technology. And we're in discussions, talking to a lot of different researchers at different large language model companies, about ways to start testing things or implement our technology into their systems. But that would be for particular verticals or particular consumer-friendly attributes. Our goal is definitely not to have a blanket cover and replace what ChatGPT and these other behemoths are doing with our model. It's just meant to be a complementary element that fundamentally protects the end consumer.

Demetrios:

How did the founders get the inspiration around this, man?

Samuel Pearton:

I actually did our internal podcast yesterday with one of our chief scientists, Jahad Jung, and you have to remember these guys are all, like, 27, 28 now, and they wrote the paper for ZKML in 2019. I wasn't thinking about AI back then.

Samuel Pearton:

With Stephen Curry, we were looking at doing some fun little AI things, a kind of mini Steph bot; we had a few different things there, but it wasn't front of mind. And I was living in San Francisco. So the fact that these guys have been so passionate about trying to solve problems that I, and the majority of the world, haven't been able to see, that is so fascinating to me. I much prefer listening to them talk than to me, though I'm grateful to be here with you saying that, Demetrios.

Demetrios:

Excellent, dude.

Samuel Pearton:

I just wanted to give a shout-out to Deepgram, who are working with Polyhedra across our marketplace. We're building out some fun, verifiable voice elements, plus we're distributing the API to our community of 100-plus blockchain builders. So they've been awesome to work with, and I'm really grateful for you having me on the show today, Demetrios.

Hosted by

Demetrios Brinkmann

Host, AI Minds

Demetrios founded the largest community dealing with productionizing AI and ML models.
In April 2020, he fell into leading the MLOps community (more than 75k ML practitioners come together to learn and share experiences), which aims to bring clarity around the operational side of Machine Learning and AI. Since diving into the ML/AI world, he has become fascinated by Voice AI agents and is exploring the technical challenges that come with creating them.

Samuel Pearton

Guest
