C2V May Notes From The Trenches
Welcome friends! We had our annual LP & Founder Day earlier this month. As always, it was a great few hours, very much made so by our incredible LPs and founders. We have more from that event below, but first…
As we were putting together our panels for the event – both of which were, of course, AI-focused (what else does a VC talk about in 2025, after all) – we were frankly blown away by just how many of our companies now use generative AI in their products. That includes companies adding genAI-powered features to products built well before ChatGPT’s debut, as well as those that built entirely new products that would not even have been possible before the advent of generative AI.
We also got some really interesting industry commentary from our panelists, so we figured it was worth a quick rundown on both (quick by our standards, anyway).
At a time when generative AI is seemingly all anyone in this industry talks about (and where nearly all of the money seems to be going), it may seem a bit strange that we’re not counting our AI companies daily. We don’t think it is, though, when you consider that:
We have not “pivoted to AI” like so (so) many of our fellow VCs have recently announced (this is the same crowd that previously pivoted to “future of work”, “web3”, etc., and generally speaking, has never seen a hot trend they didn’t love “from the beginning”… of the day they pivoted, after seeing some other VC’s LinkedIn post about it the day before).
We view the concept of a “pivot to AI” as a fundamental misunderstanding of what makes this software so valuable to an enterprise customer in the first place.
Which is to say:
We view generative AI in the same way we have long viewed predictive AI – as a tool, not a product. AI is, in both forms, an incredibly powerful tool and often a core component (if not the core component) of a product, but still just a component, not a complete product in and of itself.
In other words, without deviating one bit from our core thesis (investing in B2B SaaS and robotics productivity tools for “dirty, dull, and dangerous” industries), and without specifically targeting or limiting ourselves to “AI companies” or “AI products”, such companies are now all over our portfolio (and increasingly so over time).
More on this below, but first, a quick refresher on recent AI developments, its history thus far, and how to read the various terminology you see all over the startup press.
Background & Terminology
First, we think it’s important to understand that algorithms capable of learning, evolving, and generating novel insights based on the data sets available to them have been in widespread use for at least two decades (often called “predictive AI” to distinguish it from the natural language version, but essentially what all AI is at its core). So, it’s not that AI is brand new, but rather the successful application of these concepts to human language (called “generative AI”) is new.
The primary reason we think it seems to most people that AI just started showing up in software applications two and a half years ago when ChatGPT launched is precisely this ability we now have to interface with software applications using natural human language, and (via ChatGPT and others) the fact that literally anyone with an internet connection can try out a version of it.
By contrast, the value added by the predictive AI that has been around for years has generally happened in the background of applications, so while an end user who upgraded their data-parsing app from one using an old rules-based engine to one using predictive AI would surely notice an improvement in outputs, they wouldn’t otherwise know that those improvements were the result of the software teaching itself to produce better results (versus just a better rules-based engine).
It is also this language-based, generative AI that nearly everyone refers to when they use the generic term “AI” these days, because, again, this version is far more tangible to the average person.
We further believe it is this natural language interface that has fooled a lot of people (including many who should know better) into thinking this technology is far more advanced than it actually is, but we’ll come back to that.
First, a look at where we see the value in AI and the extent to which it has made its way into our portfolio (in particular, the new generative AI).
The True Value of AI & Our Portfolio
We have previously written about our view of how the generative AI stack will shake out over time (including in both our 2024 and 2025 annual predictions pieces), with a handful of foundational large language model (“LLM”) companies training basic models upon which another layer of products built for specific industries and use cases will sit, and that seems to be how things have more or less gone so far.
While we have shared some skepticism around the valuations and extreme concentration of capital being thrown at these foundational LLMs, they are sufficiently capital intensive that neither we nor VCs as much as 25x our size (i.e., most of the industry) have the wherewithal to play in that space (and better Andruyoshi Sonawitz’s money than ours).
So, when we talk about the counterpoint to our view/approach to investing in generative AI applications, it’s not the LLMs we’re referring to, but rather those companies within the application layer that are AI-first, use-case-later (of which there are many and with substantial funding).
We alluded earlier to how much of our portfolio now incorporates AI as a core piece of its products, and here are those numbers:
Fully 60% of our investments since the beginning of 2023 (a few weeks after ChatGPT’s public debut) use generative AI as a core component of their product.
40% of our post-ChatGPT investments fall into the aforementioned category of companies whose products weren’t even possible prior to the advent of generative AI.
55% of our pre-ChatGPT companies have now added new generative AI-powered features to their existing products.
80% of our companies added since 2023 use some element(s) of predictive AI (predictive + generative totals more than 100% because several companies use elements of both).
100% of our post-2023 companies incorporate at least one or the other as a core component of their products’ functionality.
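For the curious, the overlap implied by these figures falls out of simple inclusion-exclusion; a minimal back-of-the-envelope sketch (the 40% overlap is derived from the stats above, not a separately tracked figure):

```python
# Inclusion-exclusion on the portfolio stats above (derived arithmetic,
# not a separately reported number).
generative = 0.60    # post-2023 companies with generative AI at the core
predictive = 0.80    # post-2023 companies using predictive AI
at_least_one = 1.00  # every post-2023 company uses at least one

# |G and P| = |G| + |P| - |G or P|
both = generative + predictive - at_least_one
print(f"Companies using both: {both:.0%}")  # prints "Companies using both: 40%"
```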
Clearly, we think extremely highly of AI’s capabilities; let there be no question about that. We very much share the general market sentiment that both established predictive AI and newer generative AI are powerful tools with the potential to dramatically enhance existing SaaS platform capabilities, while also powering entirely new solutions that were not previously possible, and expanding companies’ competitive advantages exponentially over time as AI engines are trained on increasing quantities of proprietary, industry- and application-specific data sets.
To reiterate what we noted above, however, these numbers are what they are because the products we believe provide maximum utility and ROI to their enterprise users are overwhelmingly those employing AI, not because we’ve suddenly decided everything must be AI. It might not sound like much of a distinction, but we believe it is and will be a material differentiator between companies that succeed and those that do not.
This ultimately comes down to our belief that founders and investors alike should be viewing these AI tools as features rather than standalone products. In our experience, the most powerful technology in the world will not be of much use (nor sell particularly well) to enterprise companies unless it comes off the shelf capable of solving customers’ most pressing problems (i.e., without further customization), and non-technical personnel can quickly grasp a product’s capabilities and easily incorporate that technology into their daily workflows.
This means that even in cases where AI functionality may be the core foundation for an enterprise SaaS platform, the most effective, best-selling, and highest-ROI AI products are still those that sit within a traditional SaaS framework; conversely, powerful products that require high levels of customization and user sophistication don’t sell, or don’t stick (or some combination of both).
This is why our best-performing, AI-powered enterprise SaaS portfolio companies are always those whose products:
Solve specific, high-leverage problems for target customers.
Have UIs that are both intuitive to a non-technical user and designed to align as closely as possible with those users’ existing workflows.
Have equally intuitive reporting tools, similarly designed to mirror existing reporting and approval processes for both daily users and management teams.
Can deliver AI-generated intelligence and recommendations in an easily digestible format that is directly tied to each customer’s business goals (and ultimately, their bottom line).
If the backends of these products are juiced with powerful AI tools, great; if AI allows end users to do orders of magnitude more via those same UIs without sacrificing their intuitive usability, even better.
Coming back to where AI is headed, as you will have gathered by now, we firmly believe that generative AI (and its predictive predecessor) is truly groundbreaking technology, in no need of any artificially inflated hype or hyperbole. But of course, Silicon Valley is going to Silicon Valley, so we feel compelled to do our civic duty as level-headed, candid realists and provide a quick reality check on some recent claims floating around the venture-verse.
What AI Can/Can’t/Will Be Able to Do
To be clear, we believe the capabilities and utility of this new technology are incredible, full stop. Software capable of reading, parsing, summarizing, and even replicating human writing, as well as supporting free-form, conversational user interfaces, is incredibly powerful. As we noted above, we’re not only seeing the majority of our portfolio companies put generative AI to great use, we have several whose products couldn’t even exist in a pre-generative AI world (it is THAT powerful).
That part of the hype is real and, for the most part, justified. What we take issue with are the constant, ever more grandiose claims about what AI can already do, and proclamations about how quickly we’ll achieve things like artificial general intelligence (“AGI”) and a widespread replacement of humans in all fields and occupations.
Eric Schmidt (former Google CEO) was kind enough to give us a concrete example of just how overheated the hype machine has already gotten in a recent interview, where he said:
AI will have replaced “the vast majority of engineers” within one year (one year!).
We will have achieved AGI within 3-5 years.
Even for the bros who cry wolf in Silicon Valley, these are absolutely outlandish predictions.
(Incidentally, if one of these $100 billion LLMs ever goes under and takes down most of the valley with it, as it probably would, “The Bros Who Cried Wolf” would be a great title for a retrospective on the whole affair. Someone get Michael Lewis on the phone.)
Before we look at each of these claims, we thought it was worth pointing out that with respect to the AGI comment, Mr. Schmidt said “I call this the ‘San Francisco consensus’ because everyone who believes this is in San Francisco. It may be the water.” Pretty sure it’s not the water, but just as a thought experiment, maybe try talking to someone outside the metro area once in a while (or even just once)? Just something to consider.
Anyway, taking these one at a time:
The End of Human Software Engineers
We asked one of our event’s AI panels to opine on this one, and suffice it to say, they too thought it was preposterous (it actually elicited audible scoffs from a couple of panelists). Part of the issue here is that Schmidt and others in this camp are starting from a premise that seems to have been manufactured from whole cloth: that GenAI-based coding assistant apps are already producing 10x productivity gains (the implication being that they are already capable of writing extremely advanced code with minimal human oversight).
Well, nearly all of our companies are now using one or more of these coding-assistance apps. While we haven’t spoken with every one of them about their specific productivity gains, the consensus from those we have talked to seems to be that they’re seeing around a 3x bump. This is unquestionably a huge deal, especially for thinly capitalized early-stage startups, but 3x is a long (long) way from 10x, especially in this context.
More importantly though, per these founders with whom we’ve spoken, as well as all three of our panelists, GenAI coders are still making a lot of mistakes and/or just making things up (“hallucinating” in industry parlance), such that on average, 15-20% of the code produced is junk. Furthermore, it’s not that the AI coder will produce a finished product that's 80-85% correct and just needs some fine-tuning after the fact. These coding assistants must be consistently prompted and their output monitored in real-time by human users, lest the AI engine continue to build new code on top of any junk or even go completely off-script and write an entirely different application from what was intended. Our panelists suggested that today’s AI-coding apps could maybe build a very basic website more or less unsupervised, but anything more complicated than that still needs consistent, active human involvement and oversight.
In fact, per our panelists (and consistent with our view as well), these apps, regardless of how much they improve from here, won’t even reduce the number of engineers employed by tech companies, let alone replace them entirely; they will simply increase the amount of software that is produced. This indeed tracks with where we see the end-user market as well. We can confidently report that not only is the customer base in our corner of the tech world nowhere near saturated in terms of their automation needs, but it’s unlikely they ever will be, and the software industry will need to continue scaling to meet that ever-rising demand.
Again, we don’t mean to downplay this tech in an absolute sense; 80-85% success here is still a huge leap forward. It’s just that, well, this isn’t like one of those past fads (e.g., Web3) that did in fact need manufactured hype to be saleable. Here, what’s actually happening is plenty incredible on its own; we don’t need to make stuff up.
AGI Part I: Are We Actually That Close?
Whether GenAI is “smarter” than predictive AI is something for people brighter than us to opine on, but as far as we’re concerned, neither better approximates human-like intelligence than the other; GenAI simply appears more human-like because it communicates with its users in those users’ native (human) languages. As we mentioned above, we think it’s this trait that is fooling people (again, including those who should know better) into mistaking this for true, human-level intelligence.
Algorithms capable of learning, evolving, and generating better and better outputs over time have been in widespread use for at least two decades (that predictive AI we mentioned earlier) and we haven’t heard anyone suggest that any of these platforms are showing signs of “general intelligence”, but now we’re supposed to believe that because we’re feeding these algorithms human language instead of statistical data, they’re suddenly going to become human? One VC’s opinion, but we think that is an incredibly simplistic view of what constitutes true human intelligence.
We would also note that our AI panelists were generally of the opinion that, having already been trained on all publicly available written language, these LLMs are going to start seeing rapidly diminishing returns with each successive release (with one even speculating that this might be the reason ChatGPT’s latest release keeps getting delayed). So, perhaps the SF crowd is simply extrapolating earlier gains out to infinity? Certainly possible but again, they should know better.
AGI Part II: Who Cares?
If we were actually close to achieving AGI, what exactly would that do for us? It sounds cool, but as we discussed in our March newsletter (in relation to our investment thesis), “cool” and “useful” are not always the same thing, so perhaps the more important question is, should AGI even be the goal?
As you may have guessed, we don’t think so. In fact, we think this whole endeavor misses the point entirely and will almost certainly lead to a huge misallocation of resources chasing something that is purely performative. To us, the point of building software is not to replicate human thought and capabilities and just make more of it; software is really (really) good at a lot of things humans are mediocre-to-terrible at (and vice versa), so to us, the most effective use of software development resources is and will always be to focus on its strengths and leave the rest of human intelligence to the abundance of intelligent humans we already have.
This is no different for us than the folly of the current race to build humanoid robots, which we discussed a couple of newsletters ago. Again, one VC’s opinion, but if you ask us, it makes a lot more sense to lean into the strengths of software (and robotics) than it does to try and make a bunch of poor-man’s versions of humans.
2025 C2V LP & Founder Day
What a day. What a community. What a reminder of why we do this.
Last week, we gathered our extended C2V family, founders, LPs, and friends for our 2025 LP & Founder Day. It was a celebration, a strategy session, and a reunion all in one. And yes, it stretched well into the night.
It was a moment to reflect on how far we’ve come, from a personal investment vehicle in 2014 to a mission-driven early-stage venture firm with:
60+ portfolio companies
160+ LPs
$32M+ (and counting) under management
A shared thesis that’s as gritty as it is grounded: investing in the “dirty, dull, and dangerous”
We’ve never chased hype. We’ve chased impact. Backing B2B SaaS and robotics companies using AI to solve real-world problems in sectors most VCs overlook. And it’s working.
To our founders: thank you for building the future one hard-earned customer at a time.
To our LPs: thank you for betting on a different kind of venture model.
To our sponsors, Brex, Goodwin, Carta, and Weaver, thank you for backing the vision.
We’re just getting started.
AI Gone Wrong? Now There's Insurance For That
As AI risks like hallucinations, model drift, and misleading outputs grow, the insurance industry is racing to keep up.
Portfolio company Armilla AI is leading the charge. In partnership with global reinsurer Chaucer Group, Armilla co-developed a groundbreaking AI liability insurance product, offering coverage for AI underperformance, false outputs, and legal claims tied to model failures.
It’s a significant step toward safer and more accountable AI adoption.
UptimeHealth Recognized as a Top Innovator in Enterprise Tech
UptimeHealth has been named one of Fast Company's Most Innovative Companies of 2025, ranking No. 7 in the Enterprise category. This recognition highlights the company's significant impact on healthcare operations through its advanced equipment management solutions.
Boostr Launches Agent IQ Series: A Purpose-Built AI Workforce for Media Sales, Planning, and Ad Ops
Boostr has unveiled the Agent IQ Series, a purpose-built AI workforce designed to transform media sales, planning, and ad operations. This suite of specialized AI agents automates manual tasks, reduces errors, and enhances performance across the media lifecycle.
Bolt South Africa Partners with Driver Technologies
Bolt South Africa has partnered with Driver Technologies to offer its drivers a smartphone-based dashcam app aimed at enhancing safety and security. This initiative allows drivers to transform their smartphones into dual-facing dashcams, recording both the road ahead and the vehicle's interior. The app operates in the background, utilizing a picture-in-picture display to confirm active recording without disrupting the Bolt app's functionality.