Creating an offline voice-enabled timer with Electron & Vue

Smart devices are surprisingly dumb at voice recognition; to understand our commands, they merely pipe microphone data to a technology giant’s data center for processing. This creates uncomfortable privacy implications: a direct microphone connection from a private residence to a massive corporation. Smart home skills are useful, so what if we could recreate that experience locally and keep our sensitive data to ourselves?

The voice timer lights up after hearing the wake word (“Computer”) then processes “Start a timer for three minutes and two seconds”

A voice-controlled timer is a great smart home skill to have around. In the kitchen, you may have your hands occupied, or covered in flour. Let’s build a proof-of-concept timer app with Electron and Vue…
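The core of such a timer is turning a transcribed phrase like “three minutes and two seconds” into a number of seconds. Here is a minimal sketch of that step in plain JavaScript; `parseTimerCommand` and its word tables are hypothetical names for illustration, not part of any speech SDK, and a real speech engine may already return numeric slot values instead of number words.

```javascript
// Hypothetical sketch: convert a transcribed timer command into a
// duration in seconds. Assumes the engine emits English number words.
const NUMBER_WORDS = {
  one: 1, two: 2, three: 3, four: 4, five: 5,
  six: 6, seven: 7, eight: 8, nine: 9, ten: 10,
  twenty: 20, thirty: 30, forty: 40, fifty: 50,
};

const UNIT_SECONDS = { second: 1, minute: 60, hour: 3600 };

function parseTimerCommand(transcript) {
  const words = transcript.toLowerCase().split(/\s+/);
  let total = 0;
  let pending = 0; // number words accumulated before a unit word
  for (const word of words) {
    if (word in NUMBER_WORDS) {
      pending += NUMBER_WORDS[word]; // "twenty" + "five" -> 25
    } else {
      const unit = word.replace(/s$/, ""); // "minutes" -> "minute"
      if (unit in UNIT_SECONDS) {
        total += pending * UNIT_SECONDS[unit];
        pending = 0;
      }
    }
  }
  return total;
}

console.log(parseTimerCommand("start a timer for three minutes and two seconds")); // 182
```

From there, the Electron renderer only needs to count `total` down and ring a bell.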


React is arguably the most popular GUI framework for the web. WebAssembly and the Web Audio API, combined with cutting-edge miniaturized speech technology, can enhance web apps with a Voice User Interface, or VUI, powered entirely within the web browser itself. With a handful of npm packages and about ten lines of code, you can voice-enable React, creating a proof-of-concept that understands natural commands and requires no cloud connection.

Technology used for the demo, ordered by decreasing amount of purple

Start up a new React project

Create React App (CRA) is a popular starting point for building apps with React, and a natural choice. …


The Web Speech API contains powerful functionality that allows you to transcribe speech in the Chrome browser. However, it is not a suitable choice for always-listening functionality. In this tutorial, we’re going to make a quick Angular app that uses a wake word to trigger the Speech API for a completely hands-free voice interaction loop.

Technology stack logos for the demo

The Web Speech API, by its nature, is not always-listening (nor would that be practical: when it is active, it sends a continuous stream of microphone data to Google). To trigger this API with voice, we need a separate wake word detector (also known as…
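The hand-off between the two components is a small state machine: a lightweight wake word detector screens every audio frame, and only after the trigger fires does the Speech API get involved. The sketch below illustrates that loop in plain JavaScript; `detectWakeWord` and `recognizeSpeech` are hypothetical stubs standing in for a real detector (e.g. Porcupine) and the browser’s `SpeechRecognition`, not actual APIs.

```javascript
// Sketch of the wake-word -> speech-recognition hand-off as a state
// machine. The detector and recognizer are injected stubs.
const STATES = { SLEEPING: "sleeping", LISTENING: "listening" };

function createVoiceLoop(detectWakeWord, recognizeSpeech, onCommand) {
  let state = STATES.SLEEPING;
  return function processAudioFrame(frame) {
    if (state === STATES.SLEEPING) {
      // While sleeping, only the tiny on-device wake word model sees
      // audio; nothing is streamed to the cloud until the trigger.
      if (detectWakeWord(frame)) state = STATES.LISTENING;
    } else {
      const transcript = recognizeSpeech(frame);
      if (transcript !== null) {
        onCommand(transcript);
        state = STATES.SLEEPING; // one command, then back to sleep
      }
    }
    return state;
  };
}
```

In the Angular app, the detector runs continuously in an audio worklet, and entering the listening state is also where you would light up a visual indicator so the user knows the microphone stream is live.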


This article continues our journey of creating the voice interface for the food replicator from Star Trek, started in Part I.

In Part I, we set up a microphone, created a new NodeJS project, and used the Porcupine wake word engine to detect the trigger word “Computer”. Now we’ll handle the follow-on command: “Tea, Earl Grey, Hot”.

Running the “Replicator” Voice AI in NodeJS

If you wish, you can use Porcupine to wake up the application, then forward all the subsequent audio data to Amazon or Google’s cloud services. Instead, we’re going to create a Speech-to-Intent context: a bespoke speech model that is tuned for this purpose…
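To make the idea concrete, here is a toy illustration of what a Speech-to-Intent result looks like: rather than free-form text, the engine returns an intent plus slot values drawn from a narrow, predefined context. A real engine such as Rhino infers this acoustically from the audio; this sketch merely pattern-matches a transcript, and every name in it (`inferOrderIntent`, the slot lists) is made up for illustration.

```javascript
// Toy Speech-to-Intent: map an utterance within a narrow "replicator"
// context to an intent with slots, rejecting out-of-context phrases.
const BEVERAGES = ["tea", "coffee", "hot chocolate"];
const FLAVORS = ["earl grey", "green", "mint"];
const TEMPERATURES = ["hot", "iced"];

function inferOrderIntent(utterance) {
  const text = utterance.toLowerCase();
  const find = (options) => options.find((o) => text.includes(o)) || null;
  const slots = {
    beverage: find(BEVERAGES),
    flavor: find(FLAVORS),
    temperature: find(TEMPERATURES),
  };
  // A bespoke context rejects unrelated speech instead of guessing
  if (!slots.beverage) return { understood: false };
  return { understood: true, intent: "orderBeverage", slots };
}

console.log(inferOrderIntent("Tea, Earl Grey, Hot"));
// -> intent "orderBeverage" with slots tea / earl grey / hot
```

Because the model only has to distinguish phrases inside this context, it can be small enough to run offline on modest hardware.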


Articles on speech recognition have no shortage of Star Trek references. Indeed, in 2017 Amazon added the famous “Computer” wake word to Echo devices as an alias for “Alexa”, in a nod to the legendary television and film series. In 2021, it’s possible to recreate this experience on commodity hardware that processes voice privately and entirely offline. Let’s recreate the replicator, where Captain Picard orders his usual beverage, in NodeJS.

Private Voice AI understanding the captain’s beverage order, in NodeJS

The first step is the “Computer” wake word, or hotword: an always-listening voice command that triggers a device to do something, including listening for subsequent (and typically…

David Bartle
