Alex built the original Snapchat filters and sold his company to Snap for $166M.

Then he left to start Higgsfield. The company just raised a $50M Series A to help brands create AI-generated video ads at scale. 

We go deep on why he thinks Adobe is in trouble, how top advertisers are already producing 10,000+ ad creatives a year, and why the companies winning in AI video aren't building foundation models.

Why You Should Listen

  • Why consumer AI apps are a trap (and what to build instead)
  • How to drive early growth
  • The economics of AI-generated video
  • How to know when to pivot away from traction that has no long-term potential

Keywords

startup podcast, startup podcast for founders, AI video generation, generative AI startup, social media marketing AI, B2B SaaS growth, founder pivot, AI startup fundraising, creator marketing, product market fit

00:00:00 Intro

00:06:29 Selling to Snap and Working With Evan Spiegel for Four Years

00:08:28 The Origin Story of Higgsfield

00:17:47 The Real Use Cases for GenAI Video Today

00:27:26 The First Product and Why They Pivoted Away From Consumer

00:29:08 The $10 Billion Short Form Drama Market Nobody Talks About

00:33:26 Going All In on Social Media Advertising

00:41:16 When He Knew He Had Product Market Fit



Alex Mashrabov (00:00:00) :
What we have seen with, let's say, MrBeast, because, I mean, he's the most vocal person about that. He's the guy who mastered all this AB testing more than anyone else. He's AB testing AI-generated thumbnails. He's AB testing the first ten seconds of the video to increase the conversion rates. There are companies who want to be like MrBeast. They want to AB test how the actor should look in the video. HiggsField empowers data-driven experimentation as well. It's really exciting to see that some companies are willing to take a risk. We expect maybe most of the content on socials will be generated with AI, and then it's going to change the dynamics. It's just going to change the dynamics of what's the CPM, what's the CPA, and so on. Yeah, the product market fit is, my mother called me and asked me, did you see what Madonna posted? But I think the reality is that we felt like we get true organic reach. I don't want to do clickbait for you, but it feels to me that Adobe is in huge trouble.

Previous Guests (00:00:57) :
That's Product Market Fit. Product Market Fit. Product Market Fit. I called it the Product Market Fit question. Product Market Fit. Product Market Fit. Product Market Fit. Product Market Fit. I mean, the name of the show is Product Market Fit.

Pablo Srugo (00:01:09) :
Do you think the product market fit show, has product market fit? Cause if you do, then there's something you just have to do. You have to take out your phone. You have to leave the show five stars. It lets us reach more founders and it lets us get better guests, thank you. Alex, welcome to the show.

Alex Mashrabov (00:01:26) :
It's a pleasure for me to be here.

Pablo Srugo (00:01:28) :
I'm excited to chat with you. I mean, you're kind of a, "classic AI company" is not the way to say it, but it's kind of like the prototypical successful early-stage AI company, right? Raised the seed round, $8 million, and a year later, just raised the $50 million Series A. $50 million, dude, not long ago that was, I mean, a Series C, but certainly a Series B. A good, healthy Series B was $50 million. So it's pretty crazy what's happening in AI, and you're in the AI video space, of which there's a few companies. So I'm curious to kind of dive into what it is that you're doing, what you're seeing, where you're focusing. Anyways, we'll get to all of that. But maybe as a first question, give me just a little bit of your background. Maybe the years before you started HiggsField, what were you up to?

Alex Mashrabov (00:02:11) :
I've worked in the video AI space for the last ten years. I graduated in 2015, I had some experience with neural nets back at Yandex, and since then I've worked on image and video AI. I sold my previous company to Snap. In my previous company, we built technology to make face filters run in real time on mobile phones.

Pablo Srugo (00:02:32) :
Are you responsible for the filters that are on Snapchat today?

Alex Mashrabov (00:02:37) :
Some.

Pablo Srugo (00:02:37) :
That's pretty crazy, man.

Alex Mashrabov (00:02:38) :
A meaningful part of the technology, for sure. We enabled this new type of experience, where every pixel is generated with AI. Before that, it wasn't possible. Filters like this, you know, for many years, were done through just 3D tracking, not through end-to-end AI generation. So we enabled these net new capabilities, and ByteDance and Instagram didn't have those. So it contributed meaningfully to Snap's daily active user growth, and at the same time, I was always thinking about how this technology could actually empower advertisers.

Pablo Srugo (00:03:11) :
We'll move to all that, but actually, filters are just such a common thing these days, and it's one of those things you never would have thought would blow up the way it did. People love filters, they love to use this stuff. Maybe just tell me a little bit more about that first company. How did you end up doing AI video filters? Because it's not an easy product on its own. It's great technology, but then how do you actually go and get traction? I'm just curious how that all happened.

Alex Mashrabov (00:03:33) :
Look, I think the face filters really were completely novel. AI-generated face filters, it was something absolutely novel. Although when I look at China, the state of face detection there was really state of the art, even back in 2016. In China they had these surveillance systems everywhere, and the reason why I bring this up is that the development of these surveillance systems, and face detection overall, requires training those models to differentiate between real humans and synthetic media. So actually, a lot of surveillance system innovation coming from China propelled synthetic media developments across the world. And I think in China, there were very sophisticated deepfake systems in 2018, where you couldn't spot the difference from reality. We took this in a completely different direction, right? We made it fun. Interestingly enough, this Ghibli style, which led to OpenAI's popularity, we had something very similar, not just Ghibli, but a general anime style, in Snapchat. And this was the first major success of AI-generated filters. Another feature which OpenAI launched recently is called Cameos, where people can insert themselves. Interestingly enough, we had a Cameos product within Snapchat as well. So I feel that there is some, it feels like a circle, meaning that the stuff which was done five years ago can be just redone with new technology, and OpenAI is running this again. Although at HiggsField, we approach this completely differently. We empower professional social media marketers to make videos with AI, fully synthetically, without physical production.

Pablo Srugo (00:05:25) :
We're going to move to HiggsField, but just maybe one more question on your previous company. How did you end up partnering with Snap initially?

Alex Mashrabov (00:05:32) :
We had an SDK. Back then, the SDK was, look, it's not something to be really proud of, but it was these face stickers. Face stickers can really be a replacement or alternative to GIFs. Around that time, you probably remember that Google acquired a company called Tenor, and Meta acquired Giphy, and then they had to divest Giphy. All in all, this space was very active, and this gave me great insights. That's probably one of the key business learnings: there are many external forces which actually form the markets. So it's way easier to ride the wave rather than trying to create one. I think we were quite fortunate that there was a lot of movement in the market in a segment adjacent to ours, and that's how we were able to get the commercial interest and then enter negotiations with all those companies.

Pablo Srugo (00:06:29) :
So you ended up selling the company to Snapchat. How long do you stay there?

Alex Mashrabov (00:06:32) :
Almost four years. Snap is an amazing company. I mean, Evan is probably one of the best fundraisers in the world, and probably one of the best product designers in the world. He's definitely the best product designer I have ever worked with. So Evan is a genius. He is no longer a teenager; I think he understands the needs of different user groups very, very well. Also Bobby, obviously, he was my direct manager. He's a very nice person, very passionate about AI and developing the community. The team at Snap was generally amazing. I mean, Jack Brody is now CPO at Suno, and I see many other ex-Snap people working in the space of media AI.

Pablo Srugo (00:07:14) :
It is pretty crazy what they've accomplished with what was originally a sexting app, or certainly a communication-with-disappearing-images app, and then to turn it into something that's really, for young people, kind of the default place that they go, with all these different experiences. It's one of those things where, from the outside looking in, you might not understand it. If you're a big Snapchat user, it just becomes kind of seamless.

Alex Mashrabov (00:07:34) :
I think they made multiple product innovations, more than almost any company in the world, really. I think it just gets harder and harder. A lot of innovation today really requires AI, and frankly, you can probably see it in Snap's stock price, and the gross margins are relatively thin compared to Meta. So I think for Snap it's way more difficult to innovate, to bring new innovative products with AI capabilities. They still do a lot of work in the space of image generation. Although, I mean, my bet is very simple: video tells a story while an image doesn't, and I simply believe that there are going to be many more stories told using generative AI in video formats.

Pablo Srugo (00:08:21) :
What year did you leave Snapchat?

Alex Mashrabov (00:08:23) :
It was September 2023.

Pablo Srugo (00:08:26) :
And do you leave to start HiggsField?

Alex Mashrabov (00:08:28) :
Yes.

Pablo Srugo (00:08:29) :
Tell me about the origin story. How do you come up with HiggsField, the idea?

Alex Mashrabov (00:08:32) :
First, I think, I knew Yerzat since childhood. I grew up in a city called Chelyabinsk. It's a city in Russia, but it's on the border with Kazakhstan, and my father is from Uzbekistan, so I'm naturally very much affiliated with Central Asia. There is a competition called the Zhautykov Olympiad in Kazakhstan. Yerzat was one of the top kids in physics, I was one of the top kids in programming, and that's where we met each other for the first time. Over time we didn't maintain a close relationship, although he was a very prominent figure in the early reinforcement learning space. When I dived into this space of large models, I realized that ChatGPT's success is really attributed to reinforcement learning as well, and I just went through my whole address book to find people who are actually good at that. Yerzat was one of them. And then our early investors at HiggsField definitely helped a lot to make this happen. Several people introduced us, helped us build these bonds, helped us assemble the initial team of five AI engineers, who are still in the company. And then we decided to go full steam ahead. In November, I think, or December, we signed our first term sheet. It was the seed round led by Menlo Ventures, and then recently we raised our Series A.

Pablo Srugo (00:10:00) :
But walk me through. I mean, just, you know, it's one thing to figure out who you want to build with and maybe the general space of. Obviously you saw ChatGPT and GenAI, and everything that was happening. But did you pick video at the outset? And, why video?

Alex Mashrabov (00:10:12) :
So, first and foremost, I think that's where I have an unfair advantage personally, as I've spent more time in the video AI space than anyone. Second, I think I understand the problems of SMB marketing teams really well, just based on all the time I spent at Snapchat. And third, it felt to me back then that the space of LLMs was pretty much formed. In October 2023, it felt to me that it was going to be difficult for anyone to really compete against Anthropic or OpenAI. Most infrastructure companies we talk about today, Baseten, Modal, were already quite strong even back then, and there were many, many others as well. So it felt to me that the LLM space was already very overcrowded. The coding space is something where Yerzat and I share a lot of passion, and this space felt overcrowded as well, and it felt to me that video AI was a completely new space where we could jump in very early and ride the wave.

Pablo Srugo (00:11:19) :
What was the vision for the first product? Because you talked about SMBs as being kind of the ICP that you had in mind. What did you want to do for them?

Alex Mashrabov (00:11:28) :
Oh, this is a really good question. First and foremost, I think I was always impressed by the success of TikTok and CapCut, because it really started with TikTok being one of the largest advertisers on Snapchat, and then, three years forward, TikTok is the largest social media platform in the United States, and on average, users spend ninety minutes a day on TikTok, while on Instagram I think it's forty-seven and Snapchat around twenty-nine. This is the publicly available data, and they achieved this in just three years. Most importantly, YouTube, Meta, Snapchat, they don't have video editing capabilities like CapCut provides. So that's why it felt like CapCut clearly doesn't capture all the use cases.

Pablo Srugo (00:12:13) :
Maybe tell me a bit about CapCut, because people that aren't, let's say, deeply immersed in video might not use it or know exactly what it does.

Alex Mashrabov (00:12:20) :
Yeah, for sure. So, let’s maybe start from Adobe, really. Adobe is a massive 300-billion-dollar market cap company, where most of their revenue comes from Photoshop, and the reason why they achieved that is Photoshop is a perfect solution for image editing for professionals and for non-professionals. Let’s say people who need to make some, I don’t know, some poster — they still can use Adobe. We still saw that Canva came out and they took over some other use cases, especially related to graphic design and presentations, which actually Adobe couldn’t cover, but Photoshop is generally an amazing tool. In the video space, it’s not like that. Adobe is very, very vulnerable, as Premiere has great penetration across professionals. I think NPS for Premiere across professionals — think about people who are editing movies — I think it’s extremely high, but Adobe After Effects and other video tooling, I think, kind of fall short, and they don’t really cover, let’s say, prosumers and individual professional creators. They only cover people who work in large teams. So it feels to me that there is a massive segment of the video market which Adobe doesn’t address at all, and CapCut addresses maybe the most casual creators with a lot of AI-native functionality, like adding captions, removing noise from audio, and so on. Surprisingly, CapCut, I think, is a top-ten application in the world. It was never possible to achieve that if you’re just a niche tool. Typically, all the top applications either have utility, like Uber, for example, or they have directly associated utility, or it’s a banking app — because, I mean, large banks, especially in some other countries, have over fifty percent market share — or it must be a social network. I’m still shocked how CapCut achieved that. I think there is definitely some connectivity and network effect between TikTok and CapCut, but I think it’s still very, very exciting. 
So over a year, we did experimentation in the space of mobile apps as I was trying to figure out the secret sauce, how CapCut is so successful. And then, I think September or October last year, I actually saw that more and more people who work on just social media videos use CapCut on desktop. CapCut also built their desktop version, and I started to notice more and more often that CapCut is actually being used on desktop.

Pablo Srugo (00:14:49) :
Oh, because originally people were using CapCut and they're editing on mobile. You know, because it's mobile video in many cases.

Alex Mashrabov (00:14:54) :
Yes, and I think that back then I realized that, look, it's very difficult to change. I mean, generally it doesn't change the economics. When I look at the free-to-paid ratio for OpenAI, even outside of India and Southeast Asia, we're still talking about five, six percent. It's very similar to any B2C SaaS, really, and we all know that historically mobile apps are relatively hard to monetize, as they have just less sticky usage patterns. And also, we know that most of the work in SMB marketing teams still requires some rounds of approval and collaboration, which is simply impossible to achieve on mobile. That's why we decided to launch Pixel.ai, which is a web-first solution, and since then we've been riding this wave of crazy growth.

Pablo Srugo (00:15:39) :
Are you kind of, what Figma was to Sketch, or Figma to Photoshop, but this is what you are to CapCut? Is that kind of a way to think about it?

Alex Mashrabov (00:15:47) :
The way I see the companies, like major agencies, we talk about top ten agencies. Most of them, nine out of top ten, use HiggsField. The way they do that, by the way, is they don’t use HiggsField exclusively. They obviously use our competitors. I talk about companies like Runway and probably MidJourney. So what happens is all these systems are used very actively for storyboarding. The budgets for storyboarding are contracting and no one has resources to really do this manually, so they have to leverage AI. But then it gets uploaded to Miro. So I think Miro definitely capitalizes a lot on all these AI image generation use cases and storyboarding, and generally on brainstorming. That is bread and butter for Miro. So for us, there is definitely a goal to make sure that we can let these AI-native teams run the whole production within HiggsField. And when I say AI-native, I mean teams who make synthetic media, and there are lots of companies that still do shooting with physical production and then use HiggsField to generate VFX effects. This is an interesting segment, but this is not necessarily what we are pursuing. We are pursuing those AI-native creative marketing teams, and we empower them to run the whole production cycle, including collaboration and the approval process, all within the HiggsField platform.

Pablo Srugo (00:17:06) :
But do you replace CapCut? Or is this on the side of CapCut? They end up using both?

Alex Mashrabov (00:17:10) :
The most popular software, website, whatever, which is used alongside HiggsField is definitely CapCut, because after those shots are done, there are some final touches regarding color correction, maybe audio alignment, which still need to be done in CapCut.

Pablo Srugo (00:17:25) :
And then walk me through these use cases. You're talking about, the main use case for Higgsfield is people that are building completely AI generated video. Is this like you're shooting some longer video or whatever and you need the B-roll? And this is the main use case? Or is it like entire minute clips that you're putting on TikTok and it's fully AI generated? What do you see is the top use case for GenAI video these days?

Alex Mashrabov (00:17:47) :
Totally. So by popularity, I think brain rot is the most popular one for sure. By brain rot, I mean all these videos with cats stealing kids, you know. It is absolutely crazy, it is a separate universe. Most of this content is animated, and that is not what we are pursuing. We are empowering social media marketing teams to talk about their brands, and that still needs to be synthetic, and at the same time they care a lot about consistency and product placement. It is getting challenging to get actors of various demographics. For example, it is difficult to find a fifty-year-old who can shoot a TikTok-like video, although it is very easy to do this with AI. So lots of brands today are creating their own AI spokesperson and then enabling storytelling with product and consistency throughout the video. I think the sweet spot is maybe twenty-to-thirty-second-long videos. That is where TikTok really shines, and the goal for us is to really make sure that teams can start getting more value from social media. Let me please zoom in here. So what we have seen with, let us say, MrBeast, because he is the most vocal person about that: he is the guy who mastered all this AB testing more than anyone else, I think, because he is AB testing AI-generated thumbnails, and he is AB testing the first ten seconds of the video to increase the conversion rates. Where HiggsField is definitely ahead of everyone else is that HiggsField empowers data-driven experimentation as well. So it is very early, but there are companies who want to be like MrBeast. They want to AB test how the actor should look in the video. For some demographics, Alex may be a good spokesperson. For most other demographics, I think Pablo would be a better spokesperson. And then there are maybe ten other variations of spokesperson whom a brand wants to test. 
And then I think there is a question of how we can orchestrate this across different social media channels, how we can make sense out of the data. Those are the questions which social media advertisers ask themselves, I think, every day. It is very difficult to make sense out of multiple campaigns running across multiple channels, and I think this is where HiggsField thrives. I think today HiggsField is already used a lot for experimentation and we are adding a data layer on top of that and more automation layers on top of that so that especially SMBs can leverage the most out of advertising on social media and really leverage AB testing frameworks.

Pablo Srugo (00:20:18) :
But that means you can go all the way from producing a video to actually. Through HiggsField, putting it out on social media, getting the feedback back, and then just saying, like, what? Change this shirt, change the skin color, change this, whatever. Iterate on a bunch of different things until you find the video that does best, and that's the one you really put money behind.

Alex Mashrabov (00:20:34) :
Yes, yes, and this is what top advertisers are doing today. So, I mean, top advertisers on Meta. I think this is, I am not sure this is public, so I will give you high-level numbers. I think on average, companies which spend over ten million dollars in advertising on Meta are launching more than ten thousand new ad creatives every year. So, I mean, let us say it is two hundred and fifty days which people maybe really work, and we are really talking about a very high pace. We are not talking about just four or five videos a day. We are talking about forty, at least forty videos a day, and this requires some level of automation. Most companies today cannot do that, and I think it requires a lot of manual labor. And I think eventually platforms like HiggsField will empower creatives to scale the production but also to make sense out of the data.

Pablo Srugo (00:21:29) :
And just as an example, we had Hedra on the show. I'm curious, where do they play relative to you guys when it comes to GenAI video?

Alex Mashrabov (00:21:37) :
So first and foremost, I think Hedra is enabling net new types of experiences, which MidJourney or HiggsField do not enable at all. I think they enable real-time experiences as well. So it is very difficult for me to talk about other companies. I just really want to acknowledge that real-time experiences, talking to an avatar in real time, unlock huge amounts of use cases across product support, customer support, and entertainment. And I think there are some companies which are working on real-time video generation, which is another set of use cases. In the social media space, there is this sweet spot, maybe for smaller teams, to post from five to maybe twenty videos a day. When I say twenty, I just mean that if an account is small or the budgets are small, you would not get data for more than twenty videos. You need to have meaningful size in the budgets to really run data-driven operations at the scale of hundreds of new videos a day. I think we are in this sweet spot where we are empowering organizations to make between maybe ten and maybe two hundred videos a day, two hundred for larger organizations and maybe ten for smaller organizations, and eventually the goal is to make them successful on social media.

Pablo Srugo (00:22:58) :
We have tens of thousands of people who have followed the show. Are you one of those people? You want to be part of the group, you want to be part of those tens of thousands of followers, so hit the follow button. And I am curious then on your take. This is just an interesting space that is developing. When we think about AI-generated video on social media, where do you think this goes? I mean, obviously everybody knows about Sora now, Meta AI. These are AI-only worlds, let us say. And then you have TikTok or Reels or whatever it is, where frankly it is probably some kind of a mix. But I guess my question to you is, is everything going to be AI? Are you going to have a separation of AI here and humans here? Is it going to be a mix? Where do you think that goes?

Alex Mashrabov (00:23:37) :
Yeah, that is a great question. I want to give an acknowledgment here first for YouTube, because they are the only company who actually, maybe a week ago, said that they want to fight AI-generated deepfakes. I think that is what they said, essentially, maybe in softer terms, but I think they actually fight for authenticity of the content. And this is not easy, because YouTube and Meta are really cash-printing machines that never existed in the world. We expect maybe most of the content on socials will be generated with AI, and then it is going to change the dynamics. It is going to change the dynamics of what is the CPM, what is CPA, and so on. And it is really exciting to see that some companies are willing to take a risk and really do things right, while most companies clearly are going to be guided by revenue opportunities. And I think it is very hard to predict that. I simply believe that it will be difficult for smaller platforms to make Gen AI video generation systems, and essentially companies like Pinterest, Snapchat, and many others will have to have some AI-generated capabilities. We already see smaller advertising technology companies using HiggsField through API. Our API is not released yet. It is coming in the next few weeks, as we want to make sure that those advertising technology platforms can leverage HiggsField technology as well.

Pablo Srugo (00:25:09) :
But I mean, Snapchat is an interesting one, right? Because Snapchat is real time. Obviously there is a social network component, but a lot of it is more of a messaging component where you are talking to people that you know really well. And so you are maybe sharing using filters and these sorts of things, but it is not like you are trying to get ten million views like on TikTok. But I guess on a TikTok or a Reels, if you fast forward five years, just seeing where things are going, being at the tip of it, is it eighty to ninety percent AI and we just accept that this is the way it is? Or is there any reason to believe that people are always going to want to see more humans? I am kind of jaded. I am like, I do not really think so, but I am curious where your mind is at.

Alex Mashrabov (00:25:45) :
Look, I think that content on TikTok today is not, like, look, maybe TikTok is fine, but I open my Instagram Reels feed and it is not really diverse. I think the content on socials is not diverse. When I look at Hollywood and the TV space, there is way higher diversity across various genres. Clearly, vlogs and shooting my own life is an important segment. It is not going to go away. I think it makes a lot of content on socials today and it is going to get more diverse, and AI is going to be an enabler of that. So great podcasts are going to stay. People who have a unique perspective in the world will stay. People like MrBeast are going to stay because he is pushing the limits with social experiments. People like iShowSpeed are going to stay as well. He is just a media personality. People always want to watch humans, but I think there are lots of people in the middle, the messy middle, where those creators will be challenged with AI, and we are actually embracing that there is a new type of AI creator who is probably very good at storytelling. Maybe they are not as good-looking as you are, right? Or they cannot behave on camera. They cannot come up with hundreds of shows as you do.

Pablo Srugo (00:27:10) :
And then they just get these new opportunities for expression. I took you on a crazy tangent there just because I find this space so fascinating, but let's go back. You raised the seed, $8 million, and you have this idea for Gen AI for SMBs. What do you do? What is MVP number one? And then ultimately, how do you launch that?

Alex Mashrabov (00:27:26) :
So the first thing, it felt like we needed to start with AI-generated humans and just see how it works. We developed a lot of technology: we had image generation, we had video generation, we had character generation. Though we really got sidetracked, to be honest. I think this is the problem with mobile apps: all these apps where people can upload photos of people and make them kiss each other or hug each other, and then also all these use cases related to, how to say, reliving your memories. Look, I mean, this drives a lot of adoption. We kind of went off course, to be honest, because we saw that these simple use cases drive a lot of adoption, and it felt like this is what the market was telling us. But there is no real business in that.

Pablo Srugo (00:28:17) :
So you were powering these kind of consumer, play-with-your-friends, create-funny-images use cases. Yeah.

Alex Mashrabov (00:28:22) :
Yeah, this is absolutely right that we decided to deviate from that, to be honest. Frankly speaking, OpenAI clearly wants to take this market. There is no doubt about that. OpenAI wants to be a winner. They are willing to subsidize regenerations. That is why I am actually very happy that we decided not to pursue this anymore. And I think today, when it comes to the HiggsField core use case, we found a way to drastically differentiate, let us say, from OpenAI and from Google.

Pablo Srugo (00:28:51) :
When did you make that shift? Like, how long did you go down this path and when did you decide to not play that game anymore?

Alex Mashrabov (00:28:58) :
So we started to work with major publishers in the world. We actually saw the space of short-form dramas as well. Short-form drama is now a $10 billion market. It's enormous.

Pablo Srugo (00:29:08) :
What is that? I mean, honestly, I don't even know it. What's a short-form drama?

Alex Mashrabov (00:29:11) :
Oh, I can give you a very short scientific fact sheet. So drama itself is the most popular format in the world, and this has been true for the last hundred years. I mean, it is true for our mothers, grandmothers, great-grandmothers. For women it is absolutely dominant, and I believe that among men it is the most popular genre as well. Maybe the margin is a little thin, but it is still the most popular genre in the world. Over forty percent of people in China have to spend at least sixty minutes a day commuting, and the public transit system in China is next level. It is not like what we see in, let us say, LA or SF. It is more like New York level, but think New York, cleaner. And then what happens is those people have time to fill, and the adoption of mobile phones in China is crazy. People do not like to hold the phone horizontally. They want to hold it vertically. Essentially, these short-form dramas are just hundred-minute-long videos based on popular fan fiction or mid-tier books.

Pablo Srugo (00:30:21) :
What's short-form about that? Is it 100 minutes? Is it like a movie?

Alex Mashrabov (00:30:24) :
Absolutely. That's a good question. Many people call them vertical drama.

Pablo Srugo (00:30:28) :
Okay.

Alex Mashrabov (00:30:29) :
But the format is completely different. Essentially, they have higher average revenue per paying user in month one than Netflix. It is absolutely crazy. Netflix invests so much in originals, absolutely novel stories, unique IP. These vertical dramas do not have any of that. At the same time, they monetize very heavily upfront. Retention is a huge problem, but they can monetize users at thirty to forty dollars in the first month. It is absolutely crazy. The way they achieve that is they slice this hundred-minute-long video into one-minute-long segments, and then they show a paywall every seventh segment. So the way it works is at first you think, oh, two dollars, but I just want to see if it is going to be that stupid or it is going to be okay. You have already watched seven minutes, so you pay two or three dollars to continue watching. Then around maybe minute thirteen or fourteen, they try to introduce new characters, and it is not just one storyline but two or three storylines, and at least one of those storylines is a little bit interesting. Then you think, I am going to pay more. Since you have already invested fifteen to twenty minutes, it feels like maybe you will buy a subscription just for the first month to finish watching, but then you are going to cancel. And then you finish watching that, and they give you some free perks, a very aggressive gifting system as well. I think these guys completely hacked user psychology. What is important to say is that this is a new media format. Most of the dramas today, meaning the source books, are all about vampires and werewolves, so it is fantasy. AI-generated dramas are naturally going to come as well. I think there is no reason the genre mix has to stay that narrow, but these vertical dramas do not go much beyond fantasy yet. And also, what is interesting about this space is we already see companies creating choose-your-own-adventure type content where users can sort of influence the story. 
It does not exactly work this way, but conceptually they leverage that a lot, and this is a strong lever for them to monetize even more.
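The paywall cadence Alex describes lends itself to quick back-of-the-envelope arithmetic. The sketch below is an editor's illustration, not a real platform's pricing model: the segment length, paywall-every-seventh-segment cadence, and $2-3 unlock price are the rough figures he quotes, and everything else is an assumption.

```python
# Illustrative model of the vertical-drama paywall cadence Alex describes:
# a ~100-minute drama sliced into 1-minute segments, a paywall roughly
# every 7th segment, and $2-3 per unlock. All numbers are approximations
# from the episode, not actual platform pricing.

def month_one_revenue(total_segments=100, paywall_every=7,
                      price_per_unlock=2.5, completion_rate=1.0):
    """Revenue from one viewer who watches `completion_rate` of the drama."""
    segments_watched = int(total_segments * completion_rate)
    paywalls_hit = segments_watched // paywall_every
    return paywalls_hit * price_per_unlock

full = month_one_revenue()                        # 14 paywalls -> 35.0
partial = month_one_revenue(completion_rate=0.5)  # 7 paywalls  -> 17.5
print(full, partial)
```

A viewer who finishes the drama hits about fourteen paywalls, which at $2-3 each lands squarely in the thirty-to-forty-dollar first-month range he cites.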

Pablo Srugo (00:32:49) :
And is this something that HiggsField is powering, or going to power? Kind of the AI for this?

Alex Mashrabov (00:32:54) :
We were considering that.

Pablo Srugo (00:32:55) :
Okay, I see.

Alex Mashrabov (00:32:56) :
And then what happens is we realized that those companies are actually content machines. These companies make tens of thousands of ad creatives a month, not a year as I was telling you before. They make tens of thousands of new ad creatives a month. So we built some proofs of concept where AI-generated ad creatives could outperform non-AI ones, and we realized that we should not even think about short dramas. It is a very narrow segment. We should play in the broader space of social media advertising.

Pablo Srugo (00:33:26) :
When did you decide to go all in on social media advertising?

Alex Mashrabov (00:33:29) :
January this year.

Pablo Srugo (00:33:30) :
And how did you go to market on that?

Alex Mashrabov (00:33:33) :
Look, I think back then the market was completely different.

Pablo Srugo (00:33:36) :
Back then, right? Like it's 10 months ago, but it's crazy how fast things move.

Alex Mashrabov (00:33:39) :
It is crazy. Yes, it is absolutely crazy. Now there are like six copycats of HiggsField. Back then HiggsField felt very authentic. Creators typically care about camera control, they care about VFX, and they want to access all the best models. It is as simple as that. Movie directors have a set of techniques, and no one had done this for video AI. Professional creators also want to access all the best models, but it is difficult for them to keep up with that. So we built a layer on top of other models, together with HiggsField proprietary models. And we felt like this was going to be an absolutely novel solution. And we decided to work with AI educators to promote the system.

Pablo Srugo (00:34:23) :
Tell me a bit more. Yeah. Tell me the specifics. I mean, go to market is just the thing that every early stage founder really cares about. So we'd love to go deeper on that. Like, what exactly did you do to get your first, you know, 10, 20, 30 customers?

Alex Mashrabov (00:34:33) :
Absolutely. Absolutely. So let us maybe start with MidJourney. We spent a lot of time analyzing that there are lots of people who just naturally love animated content. Think about anime, cartoons, and so on. And MidJourney found a way to become the most popular technology platform across these demographics. There is definitely a downside to that, as they had to make sure that the most popular styles can be reproduced very precisely on the platform so that fans can recreate them and engage. So basically they got a lot of fan fiction, all these people who like to do their own version of the same story. They want to do their own comics and so on. They got a lot of this audience because they were the place to build animated content, and content made in a specific style that is very trendy. I think that lesson worked really well for us. Around March or April there were models that could actually do product placements, and we seized the opportunity. There are so many creators on YouTube, Instagram, and Twitter who are AI educators, who tell the audience how to make content and how to leverage all the best tools. And we made sure that we positioned ourselves as the best platform for this use case.

Pablo Srugo (00:36:02) :
Is this a paid thing, like you go to these creators and you pay them to talk about you or is it just a matter of being the best for that use case and they just do it organically?

Alex Mashrabov (00:36:09) :
It's a good question. We started to engage creators very closely only in September. I will tell you it is not very easy. So our marketing team is a total of six people. It's not a very large team.

Pablo Srugo (00:36:23) :
How many people in total, by the way, at HiggsField?

Alex Mashrabov (00:36:26) :
In total, HiggsField is probably sixty people today. Six of them are in marketing. Most are AI engineers and other engineers. Initially all the creators were posting for us for free. There were maybe just a few where we had some affiliate-program type of arrangement. Most of the posts were absolutely free. So the organic-to-paid ratio was like five to one or six to one. Now it is more like two to one. As we start working with more creators and push this affiliate program even further, there is a bit of a challenge that we cannot fully control which content they make. You know, these agencies can really try to find some controversial topics to get higher CPM and so on. For us, it is always a balance. On one side, we invest a lot in educational content. As we work with affiliates, we want to make sure that we do not lose momentum and that HiggsField is used for various use cases, not just social media marketing. But in the meantime, we want to make sure it is done responsibly. I think this is definitely one of the key areas of internal discussion as the affiliate program becomes a little harder to control.

Pablo Srugo (00:37:39) :
What's the cost like? Is it per seat per month? Is it per video? And regardless, how much does it cost these days to create a minute-long AI-generated video?

Alex Mashrabov (00:37:48) :
Yeah, that is a good question. So if you just go and hire an agency, let us say in Southeast Asia, where, by the way, we see a completely net new economy emerging, they will charge you three thousand to five thousand dollars per video. Typically, I think they would estimate that production and post-production roughly cost the same, as they will do some work in Premiere to make the video look really good. If you go to teams in the United States, the price tag starts from thirty thousand. It is still way cheaper than physical production, although it is not very democratized. Now, if you hire someone in-house and task them with making maybe two fifteen-second-long videos a day, they will be successful with that. Prompt engineers in Southeast Asia may have a rate around three thousand dollars a month, so you can get to meaningful velocity. I am talking about forty videos a month at relatively modest cost. If you run these operations in-house, you can really get the cost of a good-looking video for socials to maybe under one hundred dollars. This is exactly what those large brands pay to TikTok creators today as well. When we talk to brands, they all run pretty much the same playbook as we do at HiggsField: they work with those marketing agencies, and running an affiliate program with them, they are basically paying by performance. But there is still some base cost per video, and on TikTok that typically starts from maybe fifty dollars per video and can go as high as three hundred. So AI-generated videos are roughly the same cost. At the same time, they unlock a completely new type of visuals which was never possible before. And they also allow you to keep more operations in-house, because managing these external creators and whatever they produce is very difficult. 
So I think there are lots of benefits for marketing departments to build maybe smaller-size in-house AI-native teams for content production.
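The in-house economics Alex lays out can be checked with simple arithmetic. The sketch below is an editor's illustration using the figures from the episode: a roughly $3,000-per-month prompt engineer producing about two short videos per working day; the twenty-working-day month is an assumption.

```python
# Rough cost-per-video math from the episode's figures (illustrative):
# an in-house prompt engineer at ~$3,000/month making ~2 short videos
# per working day, i.e. roughly 40 videos a month.

def in_house_cost_per_video(monthly_rate=3000, videos_per_day=2,
                            working_days=20):
    """Fully loaded labor cost per video for an in-house AI creator."""
    videos_per_month = videos_per_day * working_days  # ~40 videos
    return monthly_rate / videos_per_month

cost = in_house_cost_per_video()  # 3000 / 40 = 75.0 dollars per video
print(cost)
```

At $75 a video, the in-house route lands under the roughly $100-per-video figure he cites for socials, and well below the $3,000-5,000 agency quotes.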

Pablo Srugo (00:40:04) :
And then how fast have things gone like in terms of revenue or like numbers of videos that you're generating a month?

Alex Mashrabov (00:40:09) :
I am not sure I can talk much about that, but I am absolutely surprised to see that on average users spend over ten hours a month on HiggsField. I do not want to do clickbait for you, but it feels to me that Adobe is in huge trouble. Remember where I started: Photoshop is used by everyone on the image side. What we are seeing today is that a lot of image editing happens with the AI models as well. Instead of downloading from HiggsField, uploading to Photoshop, and back, people use models like our recent Popcorn models, or maybe Google's Nano Banana model, or maybe a ByteDance model, to edit images with prompts. People are making thousands of image editing operations on HiggsField on demand. It is an absolutely new economy. That is why I am telling you, this is what the market looks like today. Before, we did not feel that the whole workflow could be done on HiggsField. Now I feel more and more that it all can be done on HiggsField. I am very surprised by that.

Pablo Srugo (00:41:16) :
Perfect. Well, let me stop it there. I'll ask the three questions we always end on. Maybe the first one is, when did you feel like you had found true product market fit?

Alex Mashrabov (00:41:23) :
Yeah, the product market fit is… My mother called me and said, trust me, did you see that Madonna post? It is an effect made with HiggsField. I know this is more like a joke, but I think the reality is that we felt like we got true organic reach among professionals. We have seen other major creators posting as well, and it told me that it resonates. It is not just our go-to-market efforts and creator program. The product is definitely resonating. At the same time, I think we are challenged to redefine ourselves every three to six months, to be honest. The space is evolving so quickly. We see that major companies like ByteDance, Google, and OpenAI care about this space. The default choice for every professional will stay with Google. No one ever gets fired for using Google models, I guarantee you. The same with Adobe. So we have to win with a faster pace of innovation and iteration. The product has been live for more than two hundred days, and we still push product updates every day except Sunday. On Saturday we do the silent releases where we improve UI and UX, which is not going to go viral on socials. Although on X we make our best effort to post every day and show our progress.

Pablo Srugo (00:42:47) :
And then on the flip side, was there ever a time in the last few years, a couple of years doing HiggsField, where you thought things would just not work out and you might just completely fail?

Alex Mashrabov (00:42:55) :
Of course. First, the previous year was a complete struggle. There were some moments, maybe a few days last year, where it felt like things were working. Most of the time, it felt like things were not working at all. And since I am the CEO, I obviously felt like I was taking the company in the wrong direction.

Pablo Srugo (00:43:17) :
And then last question, what would be like your number one piece of advice for an early stage founder looking for product market fit today?

Alex Mashrabov (00:43:23) :
This is a great question. The way I think about that is, look, I have a few hot takes. First of all, there are lots of companies raising gigantic rounds, seed rounds of one hundred or two hundred million dollars at a valuation of a billion dollars. It is very difficult for me to see any company other than OpenAI and Anthropic being competitive in the foundational model space. It is just very, very difficult for a startup to be a foundation model company. I do not want to say anything bad about companies which are trying to do that, but I personally do not believe it is possible. Then it is all about value creation. So I think ultimately every founder working in the space of Gen AI needs to ask: what is the value this model unlocks for my customer base, and how can I shorten the tax of learning this tool? Because all the tools, all the models, have some learning tax, right? It is not easy. That is why having very strong UX product designers and product-oriented front-end engineers becomes critically important, because they need to develop the experience. And then there may be an internal AI team and prompt engineers who can build some automation. But it really all starts from a deep understanding of customer problems and asking the team and yourself every day: how do these new models that come out nearly every week make this workflow easier? The way I see it, this is a net new technology, and its pace makes it critically important to keep learning every day.

Pablo Srugo (00:45:12) :
Perfect. Well, Alex, thanks so much for jumping on the show, man. It's been great having you.

Alex Mashrabov (00:45:15) :
Thank you.

Pablo Srugo (00:45:16) :
Wow, what an episode. You're probably in awe. You're in absolute shock. You're like, that helped me so much. So guess what? Now it's your turn to help someone else. Share the episode in the WhatsApp group you have with founders. Share it on that Slack channel. Send it to your founder friends and help them out. Trust me, they will love you for it.