IDEA 003

No NVIDIA Chips Required – Simple AI is Still Powerful

While paradigm-shifting LLMs are still in their infancy, I've worked at the edge of machine learning and AI for my entire career. Time and time again, I've found power in simple models that were easy for a human to understand; the underlying work was just too time-consuming to do by hand.

At Crowdpac, we gave each political candidate an ideological score. A shocking number of people spend no time researching candidates before voting, and seeing a sign for a candidate on the drive to the polls can be the deciding factor. If you were only going to spend a minute on research, our score gave you a good sense of where your candidate stood. Locally, endorsement groups dig into each candidate and learn their positions, but that wouldn't work at scale. We also saw a disconnect between what candidates said they would do and what they actually did, so that method was less reliable for first-time candidates. In the end, one of our founders developed a simple algorithm based on this premise:

There are only a few political donors who give to a lot of candidates. Those that do are focused on shaping policy to their liking, so two candidates who share a donor will generally be ideologically similar. If you take all federal donations over the past 30 years and map those relationships using a nearest-neighbor method, different donors pull candidates left or right until each one lands on a -10 to 10 scale. The model didn't know anyone's politics; at the end, you just had to look at the most extreme people on each side and say which one was liberal and which one was conservative.
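The core idea can be sketched in a few lines. This is my own reconstruction under stated assumptions, not Crowdpac's actual algorithm: the donor and candidate names are made up, two hand-labeled "anchor" candidates orient the left/right axis, and candidates who share a donor are repeatedly pulled toward the average of their neighbors.

```python
from collections import defaultdict

# Hypothetical (donor, candidate) pairs; all names invented for illustration.
donations = [
    ("donor_a", "cand_left"), ("donor_a", "cand_1"),
    ("donor_b", "cand_1"), ("donor_b", "cand_2"),
    ("donor_c", "cand_2"), ("donor_c", "cand_right"),
]

# Group candidates by shared donor, then build a candidate adjacency map.
by_donor = defaultdict(set)
for donor, cand in donations:
    by_donor[donor].add(cand)

neighbors = defaultdict(set)
for cands in by_donor.values():
    for c in cands:
        neighbors[c] |= cands - {c}

# Start everyone at 0, pin the two labeled extremes, and iterate
# neighbor-averaging until scores settle between -10 and 10.
scores = {c: 0.0 for c in neighbors}
anchors = {"cand_left": -10.0, "cand_right": 10.0}
for _ in range(100):
    scores.update(anchors)  # re-pin the anchors each pass
    scores = {
        c: sum(scores[n] for n in ns) / len(ns)
        for c, ns in neighbors.items()
    }
scores.update(anchors)
```

With this toy data, `cand_1` settles to the left of zero and `cand_2` to the right of it, purely because of which donors they share; nothing about anyone's politics was ever fed in except the two anchors.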

At Tchrvoice, I began by thinking that having a lot of data would help teachers perform better. I was particularly interested in Bloom's hierarchy of learning and thought that if you could categorize the types of questions a teacher asks, you could push them to ask questions that better facilitated learning. However, throughout my time as an Assistant Principal, my most common piece of feedback was to talk less. And once I was a founder running demos, I discovered that teachers didn't believe that feedback because they thought they were talking far less than they actually were. They would guess they were talking 30-40% of the instructional time when in reality it was 70-80%. When they heard the feedback, they'd discount it: their AP was only in the room for a brief moment, and it was just a coincidence that they were talking a lot at that point. The most impactful tool was also the simplest: get a class transcript and show how much time the teacher was speaking vs. the students. By recording the whole class (or the whole day), the teacher couldn't dismiss the feedback outright, and you could move toward fixing the problem instead of debating whether it was happening.
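The measurement itself is trivially simple. Here is a minimal sketch, assuming a diarized transcript where each utterance carries a speaker label and start/end times in seconds (the data format and the `TEACHER` label are my assumptions, not Tchrvoice's actual pipeline):

```python
def talk_share(utterances, teacher="TEACHER"):
    """Fraction of total spoken time attributed to the teacher."""
    totals = {}
    for speaker, start, end in utterances:
        totals[speaker] = totals.get(speaker, 0.0) + (end - start)
    spoken = sum(totals.values())
    return totals.get(teacher, 0.0) / spoken if spoken else 0.0

# Toy transcript: teacher speaks 80s of the 90s of spoken time.
transcript = [
    ("TEACHER", 0.0, 45.0),
    ("STUDENT", 45.0, 55.0),
    ("TEACHER", 55.0, 90.0),
]
print(round(talk_share(transcript), 2))  # 0.89
```

A number like 0.89 is hard to argue with in a way that a five-minute walkthrough observation never is, which is the whole point of recording the full class.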

I believe that advanced machine learning models, neural-network LLMs and otherwise, will shape the next 20 years and beyond. But there are still so many areas of work and personal life that could benefit from taking a lot of data, running a simple model, and presenting the result objectively to an end user. No NVIDIA chips required. A huge benefit is that you can explain the model to your users regardless of how technical they are. That makes it a good entry point for earning user trust, and if the product proves valuable, you can layer in the power of these AI leaps for a more willing user.

ABOUT

Ethan Kessinger
Director of Product
New York, NY

0 to 1 product leader focused on running experiments and taking appropriate risks to create engaging products.