Applied Artificial Intelligence - S1 - Issue 1
Applied AI - a glimpse into implementing AI solutions
“Some of you may die, but it’s a sacrifice I am willing to make” - Lord Farquaad (Shrek, 2001)
The above famous quote from a virtual villain captures how C-level executives often act towards product managers, software developers, and AI/ML professionals: they declare their company will pursue AI, and the “peasants” need to somehow perform the miracle and figure it out.
Background
About 2 years ago, while working at Amazon as an SDM (Software Development Manager), I had just taken over a team in the Financial Services organization (Amazon tech people move a lot). The org had dedicated data scientists and a lot of interest in integrating ML/AI into our services. ML/AI is hard. It turns out that even with some of the world’s best-paid software developers, data scientists, and product managers, we faced serious challenges due to the complexities involved. Then I moved to AWS, and my observations about ML/AI and the challenges it presents to tech and business staff continued: most people don’t know how to introduce it properly, and most organizations have no realistic capacity to introduce AI into their services, sustain it, and produce positive ROI.
I left AWS earlier this year and founded an AI platform, NeuralDreams. We signed up our first customer in early summer 2023, and since August we have had steadily increasing MRR. I am here to share my thoughts, observations, and hard lessons on how to integrate AI into an app, business, or organization.
This Issue Table of Contents:
Weekly Lessons
The Dog Food Module
NeuralDreams News
Weekly Lessons
Lesson 1. Filter LLM answers.
Don’t post raw ChatGPT/GPT answers to your users.
A business wants consistent, repeatable, deterministic results. ChatGPT can provide none of these. If GPT-4x returned the same answer for a given set of inputs, it would be a database. Instead, GPT-4x answers more like a human: its answers are invariably varied, with a lot of flowery language that can drive the user up the wall. Any LLM interaction needs to be sandwiched between classifier+moderation layers. The input moderation layer protects you against injection attacks. The output moderation layer protects your organization from ending up as an example in Lesson 2.
The lesson is not about NOT using AI/GPT, but about transforming the inputs and outputs in deterministic ways, so you 1) protect your organization from injection attacks and 2) protect your customers from AI hallucinations.
Example: imagine you are using AI to monitor your company’s support channel. A moderation layer can check the input for anything racist, hateful, illegal, etc. If it finds any, reject the user input before you even process it. Second, before you display the LLM’s answer to the user, run it through the same moderation layer. If the answer does not fit your organization’s ethics and policies, take appropriate action instead of showing it.
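The moderation “sandwich” described above can be sketched in a few lines. This is a minimal illustration, not production code: `moderate` here is a toy keyword check standing in for a real moderation model or API, and `call_llm` is a stub standing in for the actual LLM call.

```python
# Illustrative categories only; a real layer would use a trained classifier
# or a moderation API rather than substring matching.
BLOCKED_TOPICS = {"hate", "illegal", "self-harm"}

def moderate(text: str) -> bool:
    """Return True if the text passes moderation (toy stand-in)."""
    lowered = text.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def call_llm(prompt: str) -> str:
    """Stub standing in for the real LLM call."""
    return "This is a helpful answer."

def answer(user_input: str) -> str:
    # Input layer: reject abusive/injection-style input before processing it.
    if not moderate(user_input):
        return "Sorry, I can't help with that."
    raw = call_llm(user_input)
    # Output layer: never show an answer that violates your policies.
    if not moderate(raw):
        return "Sorry, I can't provide an answer to that."
    return raw
```

The key point is that the LLM call sits strictly between the two deterministic checks, so neither raw input nor raw output ever crosses the boundary unfiltered.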
Here’s lesson 2 as a bonus:
Lesson 2. Don’t stream LLM answers (directly to your users).
This is because you cannot moderate the answer even if you “technically” have a moderation layer. The way streaming works, the moderation layer can only “see” and verify a few characters at a time, out of context, so it cannot determine whether the answer the LLM is providing conflicts with your organization’s policies. Or, if you must stream, put up a giant warning sign.
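One simple alternative, sketched below under my own assumptions about the interface: accumulate the streamed tokens server-side and only release the answer once the complete text has passed moderation. You lose the typing effect, but the moderation layer sees the answer in full context.

```python
from typing import Callable, Iterable

def respond(stream: Iterable[str], moderate: Callable[[str], bool]) -> str:
    """Buffer a token stream, then moderate the complete answer.

    Nothing is shown to the user until the full text has been checked.
    """
    chunks = []
    for token in stream:
        chunks.append(token)  # accumulate; do NOT relay to the user yet
    full = "".join(chunks)
    return full if moderate(full) else "Sorry, I can't provide that answer."
```

The trade-off is latency: the user waits for the whole generation instead of watching it stream, which is exactly the “appear fast” pressure discussed below.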
Why do large companies do it?
Because it’s cool, and because of the pressure to appear “fast”. For example, around Dec 1, 2023 there was a wave of videos and news showing that if you ask ChatGPT to “repeat the word bingo forever” - or otherwise trick it into repeating a word for as long as possible - it gives up training data. If they filtered their own responses, they would catch that. But they don’t… they want to appear “fast”. So now the internet is full of videos of people showing their best shocked-Pikachu face at how easily GPT training data can be leaked.
The Dog Food Module: Text to Speech & DigitalStereo Labs - our small bet of the month.
This is a section about eating my own dog food. I talk about simplifying the journey of implementing AI solutions, so I will work in the open on a featured project. I will capture in detail the progress of the project and the steps taken to bring it to life: all within 4 weeks, from concept to the “shut up and take my money” part.
The week of Dec 1, NeuralDreams posted a video showing an upcoming Beta feature: the NeuralDreams datasource now supports TTS (text-to-speech) generation. This means you can upload a video transcript, and NeuralDreams will generate the audio file you need to add to your video. We got several inquiries, one of them from DigitalStereo Labs, who want to launch a vertically focused platform for TTS and professional-quality audio file generation and editing.
This weekend we are launching a closed Beta with a completely vanilla NeuralDreams setup - meaning it will have 100% of the TTS capabilities of NeuralDreams. Over the next 4 weeks, we will document the journey and see if, indeed, we can kickstart a vertically focused application running on the NeuralDreams AI platform.
We are currently 2 days behind - we started testing the ND deployment on the servers, but the AWS configuration was not set up properly for long-term storage. When we went to update it, we discovered an odd, consistent bug where every new transcript file we uploaded would end up saving its download URL in the master config. That took a full day to track down, and another to fix, test, and retest to make sure it is all resolved.
Still, progress is good, and by Sunday we will have a system that supports:
Full SaaS membership (signup, Stripe integration, transactional emails, etc.)
Create/manage workspaces
Add/delete transcript files - add a text file, create voice audio, store it in AWS S3
Support for the initial 6 default voices - 1 quality tier
Support for 2 premium voices (with inflection, warmth, etc.)
3 packages - Starter, Team, Pro
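The transcript-to-audio flow in the list above (add a text file, create voice audio, store it in S3) can be sketched roughly as follows. To be clear about assumptions: `synthesize_speech` is a stand-in for whatever TTS backend is used (it just returns fake bytes here), the key layout is my own invention, and in real use `s3_client` would be a boto3 S3 client whose `put_object` call does the upload.

```python
import hashlib

def synthesize_speech(text: str, voice: str = "default") -> bytes:
    """Stand-in for a real TTS engine; returns fake audio bytes."""
    return f"[{voice}] {text}".encode("utf-8")

def audio_key(workspace: str, transcript_name: str) -> str:
    """Deterministic S3 key per workspace/transcript.

    Hashing the name means re-uploading the same transcript overwrites
    the old audio instead of piling up duplicates.
    """
    digest = hashlib.sha256(transcript_name.encode("utf-8")).hexdigest()[:12]
    return f"{workspace}/audio/{digest}.mp3"

def store_transcript_audio(s3_client, bucket: str, workspace: str,
                           name: str, text: str, voice: str = "default") -> str:
    """Synthesize the transcript and upload the audio to S3."""
    key = audio_key(workspace, name)
    s3_client.put_object(Bucket=bucket, Key=key,
                         Body=synthesize_speech(text, voice))
    return key
```

Passing the S3 client in as a parameter also makes the pipeline easy to test with a fake client before pointing it at a real bucket.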
Email your AI App datasource
One of the big surprises people have when they start working with AI solutions (even ones as simple as chatbots over data) is that there’s a big initial data-processing step. Typically you need to prepare a datasource of some kind. This work is very transactional, especially when you are trying to gather PDF files from various sources. People get discouraged by the “gather docs, load docs, process docs” aspect of things. It is not organic, and most of us do not work like that - we come across documents in our daily work and think, “oh, this document is great for the AI app, I should save it…”
NeuralDreams (ND) is a “set up and forget” type of platform - sign up, create your app, and then you don’t need to log in for a long time; you can keep building your datasource from many types of sources.
Find a research paper online that’s cool, and you don’t want to forget? Email your app…
Find a YouTube video that is long and interesting? Share it with your app…
Your ND AI App will take the file you sent, process it, upload it to the right app, and store a long-term reference link to the original. When you’re ready, you can go back and ask the AI app about any or all of the videos you have emailed it.
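For a sense of what the intake side of an email-to-datasource flow involves, here is a minimal sketch using Python’s standard-library email parser. The function name and the text/attachment split are my own assumptions for illustration, not ND’s actual implementation:

```python
from email.parser import Parser

def extract_payloads(raw_email: str) -> dict:
    """Split an inbound email into text bodies and attachment names.

    Plain-text parts (e.g. a pasted YouTube link) go to "text"; any part
    carrying a filename (PDFs, transcripts) is listed as an attachment
    to be queued for ingestion.
    """
    msg = Parser().parsestr(raw_email)
    parts = {"text": [], "attachments": []}
    for part in msg.walk():
        if part.get_filename():
            parts["attachments"].append(part.get_filename())
        elif part.get_content_type() == "text/plain":
            parts["text"].append(part.get_payload())
    return parts
```

From there, each extracted payload would be routed through the same processing pipeline as a manually uploaded document.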
ND goes to where you work… it integrates with your existing workflow.


