H1 2023 update: progress and plans

We typically send updates every 4-8 weeks. All past updates can be found here.


As we finish H1, we wanted to write a bit more detail about our approach, progress, and plans. This update contains:

  • The big picture

  • Phases of work

  • Our status at the end of H1 2023

  • An overview of the team, investors, and advisors

The big picture 

The problem: People are unhappy.

The opportunity: In the last 5-7 years, several thousand advanced meditators in the US have started reporting a “life-changing” ability to enter blissful meditation states, known as jhanas, on command. Doing so has resolved opioid and heroin addictions, saved thousands of dollars on drugs and therapy, is extremely pleasurable, and makes it easier to live out values-aligned behaviors like selflessness and delayed gratification. Brain imaging from institutions like Harvard, Oxford, Berkeley, and McGill corroborates accounts of extreme pleasure and suggests these states are detectable.

The challenge: It’s very hard to learn these states, often requiring hundreds of hours of guesswork.

The solution: Biosensors that (a) monitor progress and (b) give formative feedback to make it 100x faster to learn these states. If successful, this would be a dramatic (“life-changing”), non-pharmacological, and nonaddictive improvement in wellbeing. 

Why now? Consumer-grade EEG and other biosensors are newly cost-effective, advancements in deep learning allow for new pattern recognition in biodata, and breakthroughs in neurofeedback-assisted meditation suggest meditative states can be taught in record speed.


Phases of work

To build biofeedback for these states at scale, we think about our work as having three phases: detecting progress, building feedback, and going to scale. 

Phase 1: Detecting progress (~1 year, pre-revenue)

We need to be able to detect and measure progress towards the jhanas using biosensors. To build this, we are collecting labeled data across various biosensors from expert meditators, and are training ML models to predict when someone is or is not in jhana.


Uncertain progress is one of the biggest reasons people give up too soon or never attempt to learn or teach these states. We plan to offer these measures of progress to design partners – retreat providers, meditation centers, and neurotherapy clinics. 


As an example, imagine we’re able to accurately detect someone’s progress towards the jhanas via thermal cameras, as we can in ~40% of expert meditators by watching their hands heat up.

Suddenly, for the first time ever, teaching meditation could become like teaching yoga: during class, instructors watching thermal cameras could lean down and whisper in someone’s ear that they’re headed in the right direction, or otherwise adapt their instruction.

Phase 2: Feedback (~1 year, with revenue)

Still, we see a measure of progress as just a start – just enough to land our earliest customers and most enthusiastic design partners. We aspire to use biosensors to also instruct meditators and/or their teachers on what to do differently in real-time.


In Phase 2, we’ll iterate with our earliest partners to:

  • Design more effective instruction – once we have detection, we’ll be able to evaluate conventional and unconventional instruction, blending ideas from e.g. hypnotherapy, elite sports psychology, or emotional acting with traditional jhana instruction.

  • Offer feedback in real-time – we’ll tailor the music or guided instruction a meditator is listening to in real time based on biosensor data.

  • Pare back consumer hardware – we’ve been using neural imaging equipment that costs $70K per machine to evaluate our experts. As others have done before us, we’ll need to adapt our algorithms to work with existing, consumer-grade systems, most likely the Neurosity or Emotiv for EEG.

Eventually, we want to build a closed-loop, self-sufficient, real-time system that adapts audio, haptic, or visual (e.g. VR) instruction as you meditate. After achieving detection, we think we’ll need a little under a year to build the first, scalable versions of a standalone product (e.g. real-time feedback with no teacher required).


Phase 3: Scale

Finally, once we have a sufficiently exciting product, we’ll scale beyond our trusted early design partners to more rank-and-file retreat providers, meditation centers, and neurotherapy clinics. 


We’ll start at the very high end, where we’re likely to max revenue per unit, win influencers, and be best positioned to move downmarket.


Eventually, we’ll be positioned to go direct-to-consumer, at which point we’ll decide between a Peloton model, where we sell units paired with a membership subscription, or a SoulCycle model, where we run premium retreats or meditation centers ourselves. Our estimates suggest the unit economics of either are appealing.


We occasionally get asked if we intend to scale with FDA approval and seek clinical reimbursement. Initially, no. Winning FDA clearance and reimbursement is an expensive and slow road, likely requiring 4+ years. We also think the consumer market is ultimately bigger. But just as the Apple Watch started as a consumer product and later sought FDA clearance for specific features, we too might seek approval and reimbursement when the time is right.


H1 2023 status: nearing the end of Phase 1

In Phase 1, we chose a single metric to guide all of our actions: cross-subject classification accuracy for jhana vs. non-jhana baselines. If we could train our model on some subjects and then accurately predict jhanas on different subjects, we would (a) have shown the problem is tractable and (b) be close to offering design partners a measure of progress.
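The cross-subject framing can be sketched as follows. This is an illustrative example, not our actual pipeline: the features, model, and data below are synthetic placeholders. The key point is that folds are split by subject ID (e.g. with scikit-learn’s GroupKFold), so every test subject is completely unseen at training time.

```python
# Illustrative sketch of cross-subject evaluation (synthetic data, not our pipeline).
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in for biosensor features: 28 subjects, 50 labeled windows each.
n_subjects, n_windows, n_features = 28, 50, 16
X = rng.normal(size=(n_subjects * n_windows, n_features))
y = rng.integers(0, 2, size=n_subjects * n_windows)   # jhana vs. baseline labels
groups = np.repeat(np.arange(n_subjects), n_windows)  # one group per subject

# Inject a weak "jhana signature" so the classifier has something to find.
X[y == 1, 0] += 0.5

aucs = []
for train_idx, test_idx in GroupKFold(n_splits=7).split(X, y, groups):
    # Train on some subjects, score on entirely different subjects.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"cross-subject ROC AUC: {np.mean(aucs):.2f}")
```

Splitting by subject rather than by sample matters: a per-sample split would let the model memorize subject-specific quirks and overstate accuracy on people it has never seen.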


To move the needle on our key metric, we need at least four things:

  • Sufficient quantity of data

  • Sufficiently expert jhana meditators labeling their experiences accurately

  • Sufficient signal-to-noise ratio (SNR) from our hardware

  • Correctly chosen analysis


In H1 2023, we set out to be world-class in all four of these areas:

  • Data quantity: By the end of May, we had collected ~60 hours of jhana data from 28 subjects across three retreats (one of which we hosted on 7 weeks’ notice). Benchmarks for EEG deep learning in emotion recognition show 85%+ cross-subject classification accuracy on publicly available datasets with 15-30 subjects and 30-40 hours of data.

  • Data labels: We recruited some of the most advanced jhana experts in the known Western world, and used careful interviews to ensure our data was accurately labeled.

  • Signal quality: We negotiated the purchase of $300K of neural imaging equipment for $30K, accelerating our R&D roadmap by months.

  • Analysis: We built flexible ML infrastructure, iterated on our models systematically, and emphasized interpretable, classic ML early to build intuition and make progress before we had enough data for deep learning.


In May, just before collecting our last batch of data, our iteration paid off: cross-subject ROC AUC (a measure of classifier accuracy, where 0.5 is chance and 1.0 is perfect) reached an average of 0.65. We’ll be integrating our new data and refining our models soon.


This puts us in a great position: we’ve overcome all four challenges of building detection far faster than we expected, showing the problem is tractable. We also did so using classic ML rather than deep learning; our advisors and benchmarks suggest that moving to deep learning will substantially improve accuracy.


We’re not yet out of Phase 1 – we still need to extend our model to our newest data and triple-check our result is not due to a confounding signal.


We’re now starting the earliest conversations with potential design partners – retreat providers, neurotherapy clinics, and meditation centers – about working together in Phase 2.


The team, investors, and advisors

At the end of 2022 we raised $400K, a bit more than our expected $250K, including from angels Nick Cammarata, Adam Ludwin, Coyne Lloyd, Max Bodoia, and Winslow Strong. 


Around the same time, 150 neuroscientists and ML scientists applied to the company, and we made two additions to the team: 

  • Alex Gruver came on as a cofounder. Alex did ML research at Harvey Mudd, passed on Facebook and other SWE offers to go on to early promotion at Bain, and most recently led GTM at Zoox, which he helped sell to Amazon for $1B. I’ve known Alex for years – he’ll be a groomsman in my wedding – and working with him has supercharged our efforts from running retreats to ML prioritization. 

  • Tamaz Gadaev, our lead ML engineer, brings prior experience leading multiple ML R&D roadmaps, including for biosensors like the Samsung Galaxy Watch. 


In H1 we also made almost weekly use of advisors from OpenAI, Kernel, AEStudios, and System2 Neurotech. We were particularly excited to bring aboard Rob Luke, the Head of BCI at AEStudios and a key contributor to the Blackrock Neurotech human-BCI decoder; and Graeme Moffat, the former Chief Science Officer of Muse, the largest consumer neurofeedback device on the market.


Kati Devaney moved from heavy part-time support in H2 2022 to a couple of hours a week as Chief Science Advisor in H1 2023.
