John Floren


Posted 2024/2/20

Thoughts on the Brilliant Monocle

Table of Contents

  1. Physical Design
    1. It’s pretty bright
    2. It’s pretty light
    3. It’s not too bad to look through
    4. It does take some fiddling to get the right position
    5. The plastic already broke
  2. Programming
    1. It’s not too hard to program
    2. It is essentially a dumb terminal
    3. The Noa app is terrible
  3. Company stuff
    1. Communication is kind of crappy
    2. I’ll believe the videos when I see the real thing
    3. The Monocle will probably be abandonware
  4. Community stuff
    1. Misunderstandings about capabilities
    2. The AI pivot
  5. Conclusion

A couple weeks ago, a kind soul gave me a Brilliant Monocle, a device I’ve been curious about for a long time. I haven’t had a lot of time to play with it, but I’ve done a bit and wanted to note some of my first impressions.

Physical Design

It’s pretty bright

I haven’t had a chance to take it outside in full sunlight yet. It’s been overcast all week, basically.

And yes, people can tell there’s stuff being displayed from the other side.

Update: At last, a sunny day in South San Francisco! The display is more or less completely unreadable in full sunlight. It’s acceptable if the sun’s behind a little haze of cloud, and it gets better if you can look at a dark surface like asphalt.

It’s pretty light

It really doesn’t drag down the glasses at all, and it stays in place well enough.

It’s not too bad to look through

It’s kind of like a motorcycle helmet: the area where you mostly look is clear, but there’s stuff at your periphery you can see if you look for it. It’s not a great match for the two portrait-oriented monitors on my work desktop, because the screen of the Monocle gets in the way of the bottom of the monitors – and there’s a very visible difference in the amount of light coming through the prism portion when you’re looking at it in front of a bright screen.

It does take some fiddling to get the right position

There’s a small range of positions on my glasses where the image looks crisp. Everywhere else is some level of blurry. I also had to fiddle with the positions of the nose pads on my glasses to be able to get a sharp image. For this reason, I’m a bit concerned about the Frame: have they figured out a way to get the display in a one-size-fits-all location, or will 50% of people see only blurry images? Their hipster doofus round glasses don’t appear to have any real options for adjustment, either. Update: according to https://docs.brilliant.xyz/frame/frame/, there will be “nose bridge adaptors” you can switch around to get the right position, which ought to do the trick for most people anyway.

The plastic already broke

Since I got LASIK done years back, I’m just using some cosmetic “blue light” glasses from Amazon. The top part of the frame is kind of thick, and after putting the Monocle on & taking it off a few times, I broke the plastic body where the clip attaches to the rest of the device. It still works; there’s just a little separation.

Programming

It’s not too hard to program

I started with https://github.com/milesprovus/Monocle-QR-Reader and hacked around on it until I had something that builds up a queue of messages on the device, which can be scrolled through or immediately dismissed. The challenging part of programming it is deciding how much logic & data lives on the device vs. on the controller: should the device keep the list of messages and handle scrolling itself, or should the controller receive the button presses and send down a new page of text each time?
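Here’s a minimal sketch of the device-heavy version, assuming the display and touch modules from the monocle-micropython firmware (the exact touch.callback signature here is my best guess at that API):

    import display
    import touch

    messages = []  # queue of strings; filled by whatever the controller sends over
    current = 0    # index of the message currently on screen

    def render():
        # Show the first 26 characters of the current message; see the next
        # section for wrapping longer text across multiple lines.
        if messages:
            display.show(display.Text(messages[current][:26], 0, 0, display.WHITE))

    def on_touch(pad):
        global current
        if not messages:
            return
        if pad == touch.A:   # tap A: scroll to the next message
            current = (current + 1) % len(messages)
        else:                # tap B: dismiss the current message
            messages.pop(current)
            current = 0
        render()

    touch.callback(touch.A, on_touch)
    touch.callback(touch.B, on_touch)

The controller-heavy version keeps all of that state on the phone and just ships pages of text down, which makes the device code trivial at the cost of a Bluetooth round-trip on every button press.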

It is essentially a dumb terminal

The Monocle has a microprocessor inside, but so does your keyboard. It is a peripheral controlled by a beefier machine, not a standalone computing device. Listen for button presses and send them to the controller. Take a picture and ship it over. Display text or shapes drawn by the controller. Consider it on the level of the VT100: it’s got enough brains to configure itself and show what you send it, but that’s about it.

You can comfortably display 8 lines of 26 characters each. Near as I can tell, you’ll need to split the text yourself, breaking it into 26-character chunks and drawing each line separately (offsetting each line by 50 pixels works well).
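The splitting is simple enough to do by hand. A sketch, again assuming the display.Text/display.show API from the stock firmware:

    import display

    MAX_COLS = 26   # characters that fit on one line
    MAX_ROWS = 8    # lines that fit comfortably on screen
    LINE_STEP = 50  # vertical offset between lines, in pixels

    def wrap(text):
        # Naive fixed-width split into 26-character chunks; a smarter version
        # would break on word boundaries.
        return [text[i:i + MAX_COLS] for i in range(0, len(text), MAX_COLS)]

    def show_page(text):
        lines = wrap(text)[:MAX_ROWS]
        display.show(*[display.Text(line, 0, i * LINE_STEP, display.WHITE)
                       for i, line in enumerate(lines)])

    show_page("This string is longer than twenty-six characters, so it wraps.")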

The Noa app is terrible

They ship Android and iOS apps called “Noa” which turn the Monocle into an “AI assistant”. Basically, you touch a button, it records a few seconds of audio, ships it to some cloud service for transcription, sends the resulting text to OpenAI, and displays the response on your screen. If you hold a button, it’ll take a picture and then listen for your audio, supposedly so you can tell it how to modify the picture, but I’ve yet to see it produce anything worthwhile.

It also has a funny habit of suddenly shifting the display so the right half of the text shows on the left side of the screen and the left half shows on the right, until you reboot the device.

It took me some time to realize that the app installs a main.py on the device, which overrides whatever you try to send it interactively. You have to delete the main.py file at the REPL.
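If you hit the same thing, deleting it from the REPL should just be the standard MicroPython filesystem call:

    import os
    os.remove("main.py")  # keep Noa's script from running at boot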

Company stuff

Communication is kind of crappy

Information flows through Discord, which is largely manned by one person who seems to be associated with Brilliant but maybe isn’t an employee. Despite asking over and over again, I was never able to get anyone to answer conclusively whether they thought the Monocle or the Frame would work well in full sunlight.

Because Discord is terrible, the same questions get asked over and over again – which may be lucky for Brilliant, because all you have to do is ignore any pointed questions and pretty soon somebody will come stumbling in to post some babble about how excited they are for AI SUPERPOWERS, and the questions will scroll off into obscurity.

I’ll believe the videos when I see the real thing

Videos for the Frame show the user looking at an object and asking “what is this”, “how much of this can I eat”, etc. See here for an example. This functionality does not exist in the Noa app at this time. I’ll believe they’ve actually accomplished it when somebody outside the company demonstrates it; until then, I consider these videos to be mockups.

This has been a thing from the beginning: even in 2022, their site promised features which, to my knowledge, have never been built and which, in the case of “Instant Replay”, were not possible with the hardware: https://web.archive.org/web/20220701105824/https://www.brilliantmonocle.com/. Update: the following information comes from Discord regarding the instant replay feature:

josuah — Today at 5:30 PM

"replay" was one of the first features of the Monocle, before MicroPython even started to be used. It is a feature implemented by the FPGA, and is still available today as part of the StreamLogic platform, which allows you to customize Monocle hardware features: https://streamlogic.io/ - https://streamlogic.io/docs/reify/ - https://github.com/sathibault/streamlogic-monocle-micropython - https://fpga.streamlogic.io/monocle/ - https://pypi.org/project/sxlogic/

For instance, for replay: https://streamlogic.io/docs/reify/nodes/#fbout

It is a bit technical to use, and requires experience with build tools as well as a bit of custom Python, but it definitely works. There is one last step to re-enable the feature: converting back from JPEG to RGB, which will need this to be released: https://github.com/brilliantlabsAR/monocle-micropython/pull/281

The zoom also works, but has not been released or used so far.

The Monocle will probably be abandonware

They revealed that the Frame will run Lua in its firmware, rather than MicroPython like the Monocle. I’ve asked multiple times whether they’ll port that firmware to the Monocle too, and haven’t heard anything more definitive than the AMA saying “ask in the Discord” (which is where I had asked, and where I got told it’d be answered in the AMA).

Community stuff

Misunderstandings about capabilities

Over and over again, you’ll see people come stumbling into the Discord asking the same sort of questions about the Monocle and especially the Frame.

Basically, people want this to be a Microsoft HoloLens that’s also an Apple Vision Pro, except it costs $350 instead of $3,500, and they have to slowly realize that it isn’t by querying other users who asked the same questions yesterday. If only there were some way to communicate beyond ephemeral chat messages…

The AI pivot

Throwing AI into the product (like every other company in the world, because that’s the only “sure” way to tap dwindling VC money) has attracted all the AI grifters to the Discord, and it’s obnoxious.

Conclusion

Despite the cynical tone, I’m actually kind of excited to see how the Frame turns out. I hope they manage to pull off everything they promise this time, because it’s more or less exactly the peripheral I want for a homebrew wearable computer.