Did you know you can chat with PostHog AI in your native language? Lots of people do. It only said “you’re absolutely right” about 0.1% of the time, which feels like the right level of politeness for a product analyst.
About us
At PostHog, we're working to increase the number of successful products in the world. Until now, tools for building products have been fragmented. Product analytics, heatmaps, session recording, web analytics, feature flags, and A/B testing are all helpful, but no one wants to buy, send data to, and integrate multiple products. PostHog offers these tools (and more) in an integrated, open source platform which can be hosted in either the US or EU. Both versions are SOC2 certified, GDPR-ready, and HIPAA compliant. We started PostHog during Y Combinator's W20 cohort and had the most successful B2B software launch on Hacker News since 2012 - with a product that was just 4 weeks old. With over 100,000 users, we're default alive, growing 97% through word of mouth, and we are in the top 0.01% most popular repos on GitHub.
- Website
- https://posthog.com/
- Industry
- Software Development
- Company size
- 51-200 employees
- Headquarters
- San Francisco
- Type
- Privately Held
- Founded
- 2020
- Specialties
- analytics, open source, product analytics, product, data, and engineering
Products
PostHog
Product Analytics Software
The single platform to analyze, test, observe, and deploy new features. PostHog is 10+ tools in one, featuring product analytics, session replay, feature flags, A/B testing, and more - all seamlessly integrated and open source.
Locations
- Primary
- San Francisco, US
Updates
PostHog reposted this
He’s built things, broken things, scaled things, and somehow made it all look fun. 🔥 Next week we’re hosting a fireside chat with Tim Glaser, co-founder and CEO of PostHog. PostHog started in YC Winter 2020 and quickly grew from a Hacker News MVP into an open-source product and data toolkit used by 190,000 teams. Tim’s mix of deep technical chops and a habit of rethinking how software should be built is a big part of what makes PostHog… well, PostHog. Join us on December 4th - link in comments 👇
PostHog reposted this
Is billing the most important team at PostHog? Here are five reasons they might be:

1. They enable us to charge the way we want, which helps our pricing better align with the value it creates. We can price products, add-ons, and platform packages how we want.
2. They help us charge for new products faster. This means teams can get the ever-so-valuable ✨ customer feedback ✨ sooner.
3. They ensure the billing system has the flexibility to handle many situations like discounts, contracts, credits, invoicing, and weird payment methods (pay by cheque anyone? Please don't).
4. They make sure billing is reliable. Charge too much and customers are unhappy. Charge too little and we literally leave money on the table.
5. They enable us to make more accurate revenue models and projections, which helps us with everything from hiring to fundraising to product roadmaps.

Simply put, they prevent billing from becoming a bottleneck. This was an important lesson I learned when researching everything we've done with pricing at PostHog. To read the other 5, check out my non-obvious pricing advice for startups → https://lnkd.in/gftKjjCK
PostHog reposted this
“Just this once” are the three most dangerous words you can hear in a successful, scaling startup. They mean you’re making a concession, but when success is soooo close, they can start to sound reasonable.

- The person who’s 90% culture fit but has IPO experience? Sure.
- The strategy that’s proven but doesn’t quite align with our values? Let’s try it.
- An idea that’s weird and fun, but a little out there? Better not to risk it.

This is how culture dilutes. This doesn’t necessarily mean the company will become toxic or negative, just boring. You stop doing the things that made you special in the first place.

It turns out culture isn’t some abstract value statement printed on office walls. It’s every person, decision, priority, and judgment call about what matters. Dilution happens gradually, and it’s almost invisible while it’s occurring.

- When you’re 20 people, founders set the tone directly through every interaction.
- When you’re 200, managers are making calls based on what they think the founders would want, filtered through their own experience and priorities.
- When you’re 2,000, you have managers managing managers, and now you’re playing telephone across multiple layers where each person is adding their own interpretation of “what we value here.”

Across enough managers and interpretations, the culture becomes distorted. The edges get sanded down, and the things that made you special start to feel risky or off-brand. Suddenly, you’re optimizing for what feels safe rather than what actually works.

---

This was one of my favorite insights from one of my (anonymous) teammates in our latest newsletter on how startups lose their edge. Read the full thing here → https://lnkd.in/gUweGyeM
PostHog reposted this
PostHog AI is HERE OFFICIALLY. Over a year of building it, we've learnt a thing, or two, or EIGHT about building agents (full blog post linked in comments!):

1. Watch out for the bulldozer of model improvements -- Model capabilities change gradually, then suddenly. Reasoning and tool-use reliability have changed the game in retrospect, and we are nowhere near the end of the road.
2. Agents beat workflows -- Graph-style workflows are mostly too rigid and can't self-correct. It turns out a simple agentic loop with tools works amazingly well (sketched below).
3. A single loop beats subagents -- Subagents lose context at every layer of abstraction. Keep it simple, Claude Code-style.
4. To-dos are a super-power -- A straightforward todo_write tool keeps the LLM on track across many steps by reinforcing what the next steps are. That's it.
5. Wider context is key -- Agents need core context: a CLAUDE.md equivalent. The /init command is a fantastic pattern to get that core context in.
6. Show every step -- Humans want to see the full reasoning chain and tool calls to trust the output. Transparency builds confidence.
7. Frameworks considered harmful -- LLM frameworks lock you into fragile ecosystems that become obsolete quickly. Stay low-level.
8. Evals are not nearly all you need -- Setting up realistic test environments really is harder than building the agent. Analyze real-world usage all the time.

Going from the first hacky demo to a genuinely useful agentic product has been a journey. I'd repeat it in a heartbeat… only much quicker now. Sometimes you do have to first vibe your way through the whole Gartner hype cycle though!

Check out the links below 👇 And try doing your research and product setup with PostHog AI now! Thoughts on the radical poster by Cleo Lant and Lottie Coxon?
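To make lessons 2 and 4 concrete, here is a minimal sketch of that shape: one flat agentic loop driving plain tool calls, with a todo_write tool as the only planning machinery. This is not PostHog's implementation (the post links the full write-up instead); it assumes an OpenAI-style chat-completions client, and the model name, run_agent, and the todo_write schema are illustrative placeholders.

```python
# Minimal agentic loop sketch (hypothetical, not PostHog's code).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()
todos: list[dict] = []  # the agent's running plan

def todo_write(items: list[str]) -> str:
    """Replace the to-do list; echoing it back reinforces what the next steps are."""
    global todos
    todos = [{"step": step, "done": False} for step in items]
    return json.dumps(todos)

TOOLS = [{
    "type": "function",
    "function": {
        "name": "todo_write",
        "description": "Write or rewrite the plan as a short list of steps.",
        "parameters": {
            "type": "object",
            "properties": {"items": {"type": "array", "items": {"type": "string"}}},
            "required": ["items"],
        },
    },
}]

def run_agent(task: str, max_turns: int = 20) -> str:
    messages = [
        {"role": "system", "content": "Plan with todo_write first, then work through the steps."},
        {"role": "user", "content": task},
    ]
    for _ in range(max_turns):  # one flat loop: no subagents, no graph workflow
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any tool-calling model works
            messages=messages,
            tools=TOOLS,
        ).choices[0].message
        messages.append(reply)
        if not reply.tool_calls:  # no tool calls left means the agent is done
            return reply.content or ""
        for call in reply.tool_calls:  # execute each requested tool and return the result
            args = json.loads(call.function.arguments)
            result = todo_write(**args) if call.function.name == "todo_write" else "unknown tool"
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    return "Ran out of turns."
```

The design point from the post is that all the control flow lives in this one loop: just messages, tools, and a to-do list the model keeps rewriting as it works, rather than a framework or a rigid graph of steps.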
PostHog AI hit GA today. Since we love transparency (and charts), here's a peek at who used it in beta:

- Engineers lead the pack (no surprise there)
- Founders and product folks aren't far behind
- Leadership, marketing, data, and sales use it too

See what it can do for you: https://posthog.com/ai
PostHog reposted this
The best AI engineers have a secret to getting the most out of tools like Cursor and Claude Code: they stack up a collection of example situations where AI is and isn't useful. This prevents them from wasting time, energy, and tokens getting AI to do something it's not good at. Recently, I researched how our engineers at PostHog think about this. Here's what I found.

Things our team thinks AI is good at:
- Autocomplete. Everyone loves hitting tab repeatedly to get their work done.
- Adding more versions of tests and fixing them.
- Rubberducking. Asking deep questions, learning more about the codebase and context. It's easier to understand any file or function with an LLM.
- Doing research. Less Googling for StackOverflow answers.

Things our team thinks AI sucks at:
- Writing code in a language it is unfamiliar with, like our internal version of SQL, HogQL.
- Using the correct (or even existing) methods, classes, and libraries. It regularly hallucinates these and assumes their functionality based on their name (rather than their contents).
- Following best, up-to-date, and existing practices. It often uses deprecated APIs, for example.
- Writing tests from scratch. There are so many bad examples of tests out there that LLMs often churn out the same bad tests.

Identifying what AI is and isn't good at also helps you at a meta level. It stops you from falling into the pitfall of doing easy things AI can do instead of the hard (and important) things that maybe it can't. To learn more about how to make the most of AI tools as an engineer, read my post on AI coding mistakes to avoid → https://lnkd.in/gmMwb_7t
PostHog reposted this
PostHog AI helped the whole Ray team get in on the data warehouse fun. Check out how we used PostHog to test & learn: https://lnkd.in/eJSPicKg