SaaS Founder Interview with Richard Boyd, CEO & Founder of Tanjo

Tony Zayas 0:06
Hey everybody, it’s Tony Zayas. Welcome to another episode of the Tech Founders Show, where we talk with exciting tech founders who are leading innovation at the bleeding edge of technology and changing the way we work, the way we live, and the way we play. As always, I’m joined by Andy Halko. Andy, it looks nice out there. How was your Fourth?

Andy Halko 0:32
The Fourth was great. It’s so nice and hot outside that I figured I’d join you from my estate on the plains of Cleveland. I thought it’d be a fun and good show to do outside today.

Tony Zayas 0:44
Yeah, awesome.

Andy Halko 0:46
Wipe off some of the sweat.

Tony Zayas 0:48
Yeah, it looks hot out there. Cool. Well, today — you know, we’ve had a few founders on this show talking about AI and machine learning, and we have another one that I’m super excited to have on today. We have Richard Boyd. He’s the co-founder and CEO of Tanjo — Tanjo AI — where they’re teaching people how machine learning and automation are changing our industries, and building solutions to keep organizations ahead of the curve. So I’m super excited to hear more. I’m gonna bring him on. Richard, how are you doing?

Richard Boyd 1:22
I’m great. In contrast to Andy, I’m in my basement lair that I’ve been hunkered down in for the last 18 months. But I’m starting to go above ground too, just like Andy, getting some sun. Like the groundhog, I figured I’d come out, and then I’ll head back in. So, outstanding.

Tony Zayas 1:42
Very cool. Thank you for joining us. Let’s cue this up: maybe you can tell us the origin of Tanjo. Where did the concept come from? How did it manifest? And tell us a bit about the business.

Richard Boyd 1:57
Sure, yeah. I’ll give you the Reader’s Digest condensed version, although it’s sort of a 30-year compressed story. I’ve been very fortunate to work with the same team of really smart people. I try to collect smart people like yourselves around me as I tilt at the windmills of technology out there. I’ve worked in the film industry, I’ve worked in computer games with my co-founders here. I helped create game companies like Red Storm Entertainment with Tom Clancy, and companies with Michael Crichton, the science fiction writer Douglas Adams, and Ozzy Osbourne. Only one of those four is still alive, so I’ll let you guys ruminate on that. But also in aerospace: Lockheed Martin, the 100-year-old aerospace company, bought my last company, where we were working with this gaming technology, which had AI as a component. If you’re going to make convincing worlds with convincing characters inside, you must have mastered what was the old form of AI, which is really just rules-based engines and behavior trees and finite state machines, all that stuff. But while I was at Lockheed Martin, I hung around longer than I expected. Normally, if you sell a company to a big company like that, they put these little pots of gold out every six months to keep the key personnel in place. So I thought, well, I’ll stay here two years until the pots of gold are all gone. But I stayed six years because it was really interesting. I created a group there called Virtual World Labs that worked in AI, VR, and AR, and we were just busy coming up with technologies. We found out every time you file a patent, you get a check, so we just started inventing stuff in that field. And it was then that I discovered machine learning, and that’s really what brought me to where I am today. I knew machine learning existed already, of course. But around 2009 I got invited out to Microsoft Research Labs in San Francisco with Alex Kipman, the inventor of the Microsoft Kinect system, if you remember that — I don’t think it’s around anymore. And Jaron Lanier, the guy who came up with the term VR. You’ve probably already interviewed him; if you haven’t, you should. He’s weird and cool, interesting, smart. But it was there that I realized this really is more than a new button on the calculator — machine learning is a new way to solve problems. That’s why we have automated vehicles today at the level they are, and other kinds of technologies. So shortly after playing with that at Lockheed, I jumped out and, with my co-founders, formed this company to take it to the rest of the market. That was 2014.

Tony Zayas 4:50
I would like to ask — and there’s a lot to go into with your past and kind of evolution — but I’d like to hear, just early on here before we dive more into the conversation on machine learning and AI: what’s the biggest progress that you’ve seen in that field in your time? Because you’ve been at this for a while.

Richard Boyd 5:15
Yeah, well, I’d say just the methodologies we’ve built around how we train the systems. With your audience, I’m sure they already understand the difference between the old AI and the new AI. With the old AI, you don’t start telling the computer what to do until a human — or a team of humans — understands it. Then we sit down and put it into some kind of brittle logic and try to tell the machine what to do. But it turns out we don’t know how humans drive cars; we don’t know how we do a lot of other things. This new methodology of teaching a machine to play Atari games, or Go, or anything, just from a large set of examples — that’s a new way to solve the problem. When I was out viewing the Microsoft Kinect project — I shouldn’t say I worked on it, I was admiring it, let’s call it that — it took something like 224,000 hours of processor time to teach a system what a living room was. And there were millions of examples of what an Asian living room, or a South American living room, or a rural versus urban living room looks like. What’s the difference between a cat and a dog and a lamp and a chair — stuff that we just know. The machines were really, really awful at that. We didn’t have good labeled datasets. But today, you can let a machine learning system basically watch 100 hours of video about driving a car, and it can go out there and just drive. So: better labeled datasets. But also, I think, some of the methods — layered neural nets, hierarchical systems of learning that work more or less the way we think the brain works — are allowing us to have breakthroughs at a greater pace than any of us predicted. And that’s one thing I’ve learned from my 30 years working in these technologies. Even back in 2015, a lot of us were surveyed about when a machine would win at Texas Hold’em poker, or some of these big MMO games like StarCraft that require diplomacy and deceit. And we were like, oh, that’s 10 years, 15 years off. And it all happened in rapid succession: the game of Go thing I think was in 2015, and then StarCraft and poker were like ’17. It’s all happening way faster than any of us predict — and those are experts trying to make predictions, which I think is interesting. I think we experts are really bad at predicting. But it’s happening fast, and it’s accelerating. So that’s why conversations like this are important, I think.

Andy Halko 8:06
Yeah, I think I actually heard a podcast about how experts are so bad at predicting things. But speaking of breakthroughs, I’m interested in what the turning points were. What in the last five years was a discovery, a breakout, a technology that was a turning point? And then I’d be curious where you think the next big turning point is.

Richard Boyd 8:31
Yeah, so it’s pretty clear for us. While I was still at Lockheed, the Secretary of Education, Arne Duncan, walked across DC — from Maryland or wherever the heck he is — over to Lockheed Martin and said, hey, we have this big thorny problem; do you have innovative people who can help us solve it? Think about everything in the Library of Congress and everything at the Smithsonian. We’re going through this digital transformation, we’re digitizing everything — you know, where the Ark of the Covenant is stored, and stuff like that. We’re 3D scanning objects, or 2D scanning documents, and we’ve got experts labeling it so it can be discovered by people later, but it’s taking way too long. So how can you create something that can learn from what’s already been tagged, and learn how to auto-tag things going forward, with some level of confidence assigned to it? So we built a system to do that. And our breakthrough there was just breaking away from the human way of thinking about things and trying to solve the problem more like a machine would. Machines have a massive amount of storage and a massive amount of bandwidth available to them today, especially distributed systems, so they don’t have to deal with tasks under the same constraints that humans do. If a human read War and Peace, you might say, well, this is a Russian novel by Tolstoy, it’s about war, it’s about love — you might even stop there — and it’s really long. But machines can tag it with way more things. So we came up with a system. First of all, we used 999 weighted conceptual tags. Why 999? Well, that’s the number of classifications in the Dewey Decimal System, so we just figured, okay, that’s all human knowledge. But now the system has expanded it to 4,000. We think of that as a thumbprint, a sort of hyperdimensional fingerprint around that thing — where the thing can be a Grecian urn, or War and Peace, or any collection of documents with people working on it. That fingerprint allows us to surface things much more quickly, and map or connect them with a higher degree of confidence and fidelity than we could with stuff that was tagged by humans. A lot of the labeled datasets we’re still training machine learning systems on today are hand-labeled by humans. We need to break past that — and we already have, I think — to where the machines auto-tag, and they’re tagging with these really, really large sets; I mean, they’re way overdoing it, right? But it gets really, really interesting when you get into semantic language problems and different contextual areas. So that was a big breakthrough for us — hopefully that’s meaningful enough for your audience. It’s just that one little step of, hey, we don’t have to tag it with seven things; let’s see how many the machine wants to tag it with, and let’s go with that. Then the next breakthrough — and I’ll just add one more that’s been really interesting — is we figured out that, the way we were mapping documents at the Library of Congress and such for the Learning Registry, the machine could not tell the difference between a document or an object or a person.
So we started getting really interested in that fact. We almost made a dating app early on — 2014, ’15 — because we let our system crawl all over OkCupid, back before everything got locked down. Basically, it came up with these little Myers-Briggs-like models of all the people on the site, and then you just have to figure out, okay, what kinds of people and physical traits and everything connect better with others. At the time there was no machine learning or AI in dating apps; we just decided that’s not what we wanted to do with our time and energy and focus, and treasure and blood. But it was really interesting. So one of the things we do today with Tanjo is we make animated personas of people for market research. We were finalists for the XPRIZE Pandemic Response Challenge around modeling human behavior. So if you’re in the state of North Carolina, we can figure out: what are the largest groups of people here? What prevailing values, traditions, and feelings do they have? Who are their influencers? And how do we move them toward the desired behavior of using protective equipment, social distancing, getting vaccines — all that stuff? To me, that’s one of the highest moral purposes of this tech: not just predicting the future, but helping to create a better one.
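
As a rough illustration of the weighted-tag “fingerprint” Richard describes — not Tanjo’s actual code — here is a minimal Python sketch that represents each document as a sparse set of concept weights and compares two fingerprints with cosine similarity. The tag names and weights are invented for the example.

```python
# Minimal sketch: a "fingerprint" is a sparse dict of concept -> weight,
# and two fingerprints are compared with cosine similarity.
from math import sqrt

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse weighted-tag fingerprints."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm_a = sqrt(sum(w * w for w in a.values()))
    norm_b = sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical fingerprints; a real system would use hundreds or thousands of tags.
war_and_peace = {"russia": 0.9, "war": 0.8, "love": 0.6, "aristocracy": 0.5, "history": 0.4}
new_document  = {"war": 0.7, "strategy": 0.6, "history": 0.5, "russia": 0.3}

print(f"similarity: {cosine(war_and_peace, new_document):.3f}")
```

The point of over-tagging is exactly this: the more weighted concepts each object carries, the more meaningful these pairwise comparisons become when you try to surface or connect related things.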

Andy Halko 13:14
I’m surprised the number of contextual tags for the Library of Congress wasn’t 42. That is the meaning of life, the universe and everything. But —

Richard Boyd 13:25
Sometimes we superimpose that in honor of Douglas, who I miss greatly. But I can tell you, there was one project we did for a chicken company. They were like, well, we basically have four kinds of chicken buyers. And we’re like, four? Why don’t you have 126 million? How about one model for every household in the US? Because with all this amazing tech, automation, and storage, I can build that. I can tell you the body mass index of every household in the United States, for example. What do you do with that? I don’t know. But we just tried to explain: hey, we have these superpowers now, so why do you only have six kinds of chicken buyers? They’re like, well, this is our market research, and we really had to make it workable. We’re like, hey, let’s go see what the machine thinks. And that’s one of the processes we go through — we call it the new scientific method. The old scientific method is a human comes up with a hypothesis, you design some experiments, and you test against it. What we do now is we throw all the data at the machine learning system — a collection of brains, actually — and say, find every hypothesis you think is interesting. Then the human goes through and says, oh, that looks cool, that looks cool. Then you let it iterate on that, and you find new views, new perspectives on data that we’ve never had before we had this tech. Long story short: there are, in fact — not a surprise — not 126 million meaningfully different ways to buy chicken, it turns out. But it’s also not six; it’s a larger number. We ended up drawing the line at 42, because we thought that was a nice little Easter-egg insider thing. But it could have been any number between 42 and, like, 90. That’s great.
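
To make the “handful of segments versus 126 million versus 42” question concrete, here is a hedged sketch — on synthetic data, not the chicken company’s — of letting the machine propose a segment count by clustering household feature vectors and scoring a few candidate counts.

```python
# Illustrative only: score candidate segment counts with a clustering metric
# instead of assuming four segments or building one model per household.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Pretend per-household features: spend, health-consciousness, price sensitivity, ...
households = rng.normal(size=(5000, 6))

scores = {}
for k in (4, 12, 42, 90):  # candidate segment counts
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(households)
    scores[k] = silhouette_score(households, labels, sample_size=2000, random_state=0)

best_k = max(scores, key=scores.get)
print(scores, "-> keep", best_k, "segments")
```

On random data like this the scores are flat, but on real behavioral features a criterion of this kind is what would justify landing on 42 rather than 4 or 90.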

Tony Zayas 15:16
So I’d be interested, Richard, in hearing more about the work that you guys are doing at Tanjo. Looking at your site, it’s super interesting. I like the statement there: humans are good at some tasks, computers at others — we’re here to empower humans with machine-learning-powered decision support. So how do you guys do that? What does that look like? Who are you working with?

Richard Boyd 15:38
Yeah. I mean, recently we’ve got a lot of government contracts, and that’s where it’s getting incredibly interesting. But I’ll just say that the originating thesis statement was that in this century, the most critical problem everybody needs to solve — whether you’re an individual, a company, or a government — is how to strike the right balance between machine intelligence and human intelligence. What’s that balance for whatever task you’re performing? And if you’re not looking across everything constantly and making that decision — because it changes frequently, like every month or every three months, and like I said, it’s accelerating — you’re not going to be competitive very soon. We’re not trying to solve all the problems in the world; there are lots of people working on process automation and things like that, and we’ve done a lot of that initially. But we really enjoy using it to help people be smarter — that amplifying-intelligence bit. So, for example, we’re tying all 58 community colleges in North Carolina together with this one sort of all-seeing, all-knowing brain. The general idea is that any time any of the 10,000 or 11,000 faculty at these 58 different institutions goes to a conference, or reads something, or collects some bit of new intelligence, or creates a new curriculum about 3D printing or drone management or whatever, it becomes part of the whole collective intelligence. And if someone starts working on something — like trying to create a new curriculum on 3D printing — this thing would automatically say, hey, it looks like you’re trying to do this; here’s a bunch of stuff that’s recently been added, and other people have upvoted this bit of technology or this information object as really useful, so grab it first — don’t recreate work that’s already been done. We’re still exploring what kind of effect that might have on the GDP of the state: to have that connective tissue, with all of the intelligence being gathered, so that organizational knowledge is not lost every time someone leaves. Or because you’re at a school in one part of the state trying to teach marine biology, and you don’t realize Cape Fear Community College has a whole bunch of great curricula around that — you didn’t even know about it, because you don’t have the ability to search there. So we have ways of connecting the knowledge that also preserve privacy, and are GDPR compliant and all that. And people can be thoughtful about the things they want to contribute, so it doesn’t feel invasive. So that’s one really interesting thing. And the other is, again, this idea of taking the data exhaust that all humans leave behind out there and turning it into these models of people, which you can then put into simulations — a really powerful idea, and again it harkens back to the stuff we were doing in gaming. Really powerful and interesting. If you’re trying to manage your way through a pandemic and you’re the governor of a state, wouldn’t it be helpful to know how people really are thinking and feeling? Who their influencers are? What their perspectives are?
And what are the sets of micro-messages I can deliver over a period of time to nudge them toward the better behavior, through the conduits that make sense to them — messages that don’t come in as this foreign “you will do this or else,” because that’s not how people, unfortunately, make decisions and govern their behavior. To me, that’s really, really interesting: modeling populations and trying to guide them along benevolent paths.
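
A minimal sketch of the “don’t recreate work that’s already been done” recommendation step, assuming a content-relevance score (for example, a fingerprint similarity like the one sketched earlier) and peer upvotes are already available. The resources, scores, and blend weights are illustrative, not Tanjo’s actual ranking.

```python
# Rank existing resources for someone starting a new curriculum by blending
# content relevance with peer endorsements across institutions.
from dataclasses import dataclass

@dataclass
class Resource:
    title: str
    relevance: float   # e.g., fingerprint similarity to the new project, 0..1
    upvotes: int       # peer endorsements across the colleges

def rank(resources: list[Resource], w_relevance: float = 0.7, w_votes: float = 0.3):
    max_votes = max((r.upvotes for r in resources), default=1) or 1
    def score(r: Resource) -> float:
        return w_relevance * r.relevance + w_votes * (r.upvotes / max_votes)
    return sorted(resources, key=score, reverse=True)

catalog = [
    Resource("Intro to 3D Printing (hypothetical course)", 0.82, 40),
    Resource("Drone Operations Module", 0.31, 55),
    Resource("Additive Manufacturing Lab Guide", 0.77, 12),
]
for r in rank(catalog):
    print(r.title)
```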

Andy Halko 19:24
Yeah, I like that — the nudge where, if you tell somebody that nine out of ten of their neighbors paid their taxes instead of threatening them with a fine, they’re more likely to pay, because you found that out through data, right?

Richard Boyd 19:40
Yeah. Or find something they care about, and then start messaging there. I usually tell — it would take me a while to tell, but there’s a whole story around kale that I tell just to use a neutral example. Six years ago, the number one buyer of kale in the United States was Pizza Hut, and it wasn’t for eating — it was just for putting on the ice around the salad bar, because it stays green a long time. Then they started using technologies like this. They created the American Kale Growers Association, which had a membership of one — a PR firm — and just started nudging people and figuring out: who are the susceptible populations we can get to eat more kale? Who are their influencers? Oh, it’s Gwyneth Paltrow, it’s whoever. So let’s have Gwyneth Paltrow do a cookbook and get on Rachael Ray talking about it. Next thing you know, people are walking into restaurants demanding massaged kale — like they do in Cleveland, Andy. They walk in and demand massaged kale with a carbon-neutral footprint at 15 bucks a plate, right?

Andy Halko 20:38
All the time. All the time. So, you mentioned GDPR and intrusiveness. I’m curious: do you think AI and machine learning improve privacy, or make it a little more exposed, I guess?

Richard Boyd 20:59
Yeah. I mean, like so many powerful technologies we develop — you know, Marshall McLuhan talks about the tools we make making us — it is a double-edged sword. It’s incredibly powerful. And that’s one of the reasons why I wanted to join the Atlantic Council, if you guys are not familiar with what’s going on there. One of our Atlantic Council members was just on Bill Maher recently, and the whole mission is, you know, be bold, be benevolent — I’ve forgotten the other one — but it’s to really do good with the tech and nudge it all toward good behavior, overall benevolent, with some thought around the policy, because policy typically trails any technology by 10 years. But when I talk to groups like the Joint AI Center for the military here, I started early on urging them: this isn’t like other tech that you’ve bought. It’s not like an ERP system, or Microsoft Word and Excel, things like that. It is incredibly intimate. It only works well when it has a deep understanding of your organization. So for God’s sake, make sure it’s inside your firewall in some fashion — even if that’s a firewall that extends out into the cloud — it has to be under your control. And you should be taking a lot of care with how it is trained, again, to avoid bias and other kinds of potentially malevolent outcomes, and then understand how it’s making its predictions, how it’s making its judgments, and make sure they’re consistent with your values. But if you’re doing that as a SaaS thing, and you’re leaving your data off somewhere, then it goes into a black box and you don’t know what’s happening. That’s a huge mistake. So it’s really intimate, it’s really powerful. Therefore, a decision we made early on that was a differentiator: when we work with, say, the community colleges in North Carolina, we go, here’s the source code. We’re going to take our original source code, modify it a bit to meet your needs, and then it becomes yours to manage. You’re crazy if you don’t have that source, if you don’t have that deep visibility into how the thing is working. I don’t understand people who buy machine learning and don’t get that.

Andy Halko 23:24
Another word that you said — benevolent — stuck out to me. Years ago, something that just stuck in my head was the singularity — you know, that point where machine learning and AI match humans, and then there’s just this huge offshoot of knowledge and the ability to solve problems and invent and do all of these things. I’m curious about your opinion on this idea of the singularity and where it is, and what that looks like for us and humanity — if it’s a real thing.

Richard Boyd 23:59
Yeah, you know, this actually started with Vernor Vinge, in San Diego, who wrote about that first. He was basically saying the technological singularity is the last invention humans will ever have to make — what we call AI-plus. Once you create self-improving AI, hopefully they’ll keep us around, right? And the reason it’s called a singularity is because beyond that point, we can’t really make predictions — and it’s already getting difficult to make predictions, as I pointed out earlier. Then, of course, Ray Kurzweil picked that up with The Singularity Is Near, a book he wrote early on, and I gravitated toward him at that time, tried to collect him into my posse of really smart people, started going to the Singularity conferences and things like that, meeting people there. And I came away convinced that it’s inevitable we’re going to get there at some point — although I still have that problem, like a lot of experts, of thinking it’s further off, because I’m working with it every day, and it’s hard to believe that this thing I’m running over here is actually going to be sentient anytime soon, right? But it is already self-improving, and that’s where it gets weird. Early on, I built a persona of Victor Hugo. I just pointed our system at everything he’d ever written, everything written about him, all of his private letters, everything. I also did my dad, by the way, when he died in 2017 — so I’ve done a little speaking tour around him across South America and places like that. There’s this version of my dad that lives on a server that I can go visit, and I can talk to him now, and he’ll speak back to me from whatever data is informing him. But early on, when we had Victor Hugo, we had a couple of weird moments where his interest graph changed and evolved. We’re still contemplating the philosophical implications of that — what does that mean? And — I can’t say the name of it — but we have some contracts with some acronym agencies here in the United States, with our partner NCI up in DC, and we’re modeling all the world leaders. So imagine taking a lot of data about world leaders — I can’t describe how it’s done — and you end up with this animated person, which you can then put into a simulation. You can say, okay, what if this happens? What if Turkey gets invaded? Or what if we start putting nuclear missiles in Ukraine — how is Vladimir Putin likely to respond? And it’ll give you this sort of Monte Carlo distribution of potential things he might do. That’s really informative, and the better your models are, the more interesting it gets. That’s the world we’re in now. It’s very different, a little scary, and that’s why we need groups like the Atlantic Council thinking about how we keep this bending toward a benevolent outcome — better, optimal outcomes for everybody — and not going off the rails.
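
This is not the system Richard alludes to, but a toy sketch of what a “Monte Carlo distribution of potential responses” can look like: repeatedly sample a persona model under one scenario and tabulate the outcomes. The persona, actions, and probabilities below are invented.

```python
# Toy Monte Carlo over a single persona's response tendencies for one scenario.
import random
from collections import Counter

persona = {  # hypothetical response tendencies for one modeled leader
    "diplomatic protest": 0.45,
    "economic retaliation": 0.30,
    "military posturing": 0.20,
    "no visible response": 0.05,
}

def sample_response(model: dict[str, float], rng: random.Random) -> str:
    actions, weights = zip(*model.items())
    return rng.choices(actions, weights=weights, k=1)[0]

rng = random.Random(42)
runs = Counter(sample_response(persona, rng) for _ in range(10_000))
for action, n in runs.most_common():
    print(f"{action}: {n / 10_000:.1%}")
```

In a richer model the probabilities themselves would be conditioned on the scenario and on the persona’s learned interest graph, rather than hard-coded.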

Andy Halko 27:13
So how does what you’re doing — sorry, go ahead, Tony. No? Okay, I’ll go on. I was just curious how what you’re doing in AI plays into the deepfake world, too. I’m so intrigued by deepfakes and what’s possible — and, to be honest, more by the scary parts of deepfakes. It sounds like what you’re doing potentially touches that a lot.

Richard Boyd 27:42
Yeah. I mean, I think we really have to come up with countermeasures against things like that. Again — what we’re trying to do, and I can’t go into too much — but think about some of the issues out there, like human trafficking and cyber warfare. Having a bunch of systems that can detect and ensnare the bad actors out there, by behaving in a certain way, is how we’re trying to use it for good. But can that technology be used for disinformation, for what we now call information warfare? Absolutely, and very successfully. It turns out it’s not very difficult at all to manipulate people. Take that example I gave of kale, but take it toward believing that you really need to go into that pizza shop and shoot everybody inside because they have a child-trafficking ring at the bottom of it. That’s someone who got manipulated over time with micro-messages. It wasn’t one well-crafted email; it was a bunch of micro-messages that they slowly adopted over time without even realizing it. Like: dark green vegetables are better for you. Yeah, but kale tastes bad. But oh, Rachael Ray has a recipe for making it taste better, and Gwyneth Paltrow likes it. Next thing you know, you’re going, I should eat kale — and you can’t even remember why you have that new position, when iceberg lettuce was fine 18 months ago. All of a sudden that’s your new position, and some people will argue about it: why aren’t you eating kale, Andy? You need to be eating kale. Well, why do I need to be eating kale? Because of all these — I don’t remember, but I just know. Right? And it’s easy to do that. So we’ve got to come up with countermeasures, and not be too squeamish about those countermeasures, to make sure we don’t lose these information-warfare battles.

Andy Halko 29:30
Well, one thing I always think — I don’t know, the ego of Americans, the people I talk to — is that we’re ahead of the technology, we’re inventing these things, we’re going to set the tone. But I always think all it takes is one bad actor out there, or a couple, to really innovate way ahead of us and do anything, cause anything they want. Is that something you ever think of as a real fear?

Richard Boyd 29:59
No, no — absolutely. And I think that, looking at the international distribution of work in this field, it is not US-centric. In fact — I don’t want to get in too much trouble here — but it seems to be weighted much more toward parts of Asia right now than most people would be comfortable with. So we should have a lot of humility. It feels to me — and I’m not the only one talking about this — that it’s almost like a Sputnik kind of challenge: hey, we’re behind in the space race, and we’d better catch up, so let’s make some really thoughtful investments. I think that’s where we are in the US right now, especially at the government level. That’s why — we just got a seven-year, $800 million contract with GSA, together with NCI and Lighthouse and others, to be thoughtful around machine learning, AI, and digital transformation. So there are big investments happening in that space, and DARPA and other groups that are truly doing capital-R, capital-D R&D are spending a lot of time there. So I’m hopeful that we’ll be able to catch up, or at least maintain parity. But I do feel like, even now, we’re behind.

Tony Zayas 31:23
That’s a super interesting parallel you drew — Sputnik and the race to space and all that. Super interesting to think about. The question I’m curious about is, with all the potential for good and for bad, and for the unknown: you mentioned the Atlantic Council — who are the other players? Who is involved in the process of ensuring that, as a community, those of you involved in AI and machine learning push things forward yet stay responsible and look at the bigger-picture implications of things?

Richard Boyd 32:10
Yeah, I think there are a growing number of organizations that are taking this really seriously. I can’t think of the name of the organization, but Joi Ito, the former director of the MIT Media Lab, has an organization focused on this idea of ethics and AI. And there’s the Wilson Center as well, which does work similar to the Atlantic Council. But I think we do need more attention and more focus on the issues of ethics. And the question — because it can be weaponized so freely — is: do other countries, other state actors, necessarily have the same concerns about ethics that we do? The assumption is that they don’t. But we don’t know. I think the scary thing is we don’t necessarily know. So it’s unknown.

Tony Zayas 33:11
Yeah, it’s interesting. I would love to hear a little bit — you shared with us before we went live that you put together a manifesto, the Superhuman Age manifesto. We’re going to send that out so viewers can take a look at it. I would love to hear some of your favorite points from it. One thing that stands out to me, just looking at it briefly here, is that you mention AI is the new UI. I would love to hear you elaborate on that a little bit.

Richard Boyd 33:39
Well, that’s something I’ve given a lot of thought. I started working in technology in 1989, so it was very early — before the internet, which freaks people out, my kids at least, when I tell them. No, I started working in tech before — at least before the graphical internet was born, which I peg at about 1993. And there have been a tremendous number of changes along the way. But one of the evolutions we’ve seen is the evolution of the interface. Again, I’ll harken back to Marshall McLuhan, where he said the things we make make us: any time we start using tools and building things, we end up adapting ourselves to those things. So we learned to program and create batch commands, and we got carpal tunnel syndrome from typing on our keyboards — like, why am I typing on my keyboard? I told a story a couple of years ago about my daughter when she was about 18 months old. This was very early in the touch interface — she was born in 2005, so before the iPhone was even out. I caught her one time in the living room trying to swipe her hand across the TV. I’m like, what are you doing? Stop touching the rapidly depreciating asset. What are you doing? And she’s like, I want to watch Barney. She was trying to change the channel. And I stopped and said, no, no, no — first, here’s the remote control for the stereo, this goes to the cable box, this is how you turn the TV on, this is how you do all this. And she looked at me like I was crazy. That’s just one of those moments where I’m like, yeah, why does it work like this? Why can’t I just walk in, like we can now, and say, hey, turn on Barney, I want to watch Barney? It shouldn’t even need buttons; it shouldn’t need our effort to react. The interface is becoming more ambient. But that does require an AI-type system that’s meeting us more than halfway — better than Alexa, better than Siri, although those things are improving. We’ll get to the point where, when you walk into a room — like in Bill Gates’s home — imagine the artwork changes to suit you, the ambience changes, and eventually, when we have AR, the whole world is going to be the way you want it. Again, there’s potential for negative outcomes there, where our bubbles get even bubblier and I’m only seeing my little world and I don’t even see your world anymore. But I do think the age of us adapting to the tech is over; we need this stuff adapting back to us and helping us. And like I said, the key driver for me — the user story for people using the brain we put inside places like RTI or Blue Cross Blue Shield or the 58 community colleges or wherever — is that it should surprise you with things you didn’t know you need. It shouldn’t require you to be adept at search. It should bring something to you and say, hey, it looks like you’re working on this project; here’s this thing I brought you. And by the way, it’s on the other side of this firewall, which requires you to go talk to Tony — Tony will get you access to this thing you would never have known existed, but it has this really high value number on it. Based on what I know about you, I think you need that thing, or that person, or whatever that information object or resource is.
And that’s what I mean by the interface becoming ambient.

Andy Halko 37:15
What you’re saying makes me think of a joke I always make, which is that the movie WALL-E is probably going to be the greatest predictor of the future — just because I imagine that with AR and people not having to do anything, we’re all just going to be floating around in our chairs, constantly entertained.

Richard Boyd 37:35
Exactly. I mean, there’s a bunch of different competing views, but the one I still adopt is what I wrote about in 2013 in TechCrunch, which was more of a human-augmentation view: it’s not machines replacing us, or just watching over us with loving grace; instead, it’s us humans evolving into a more potent form. These things are already extensions of us. Someone who’s really good and adept at search today on these devices is arguably more intelligent, more capable than someone who says, I don’t use that stuff. But it’ll be even more pronounced as it becomes more ambient — I’m just walking down the street with glasses like this in New York, and I have my little brain pal saying, well, you can keep walking down this street and have a 24% chance of a negative interaction with a street hustler or something, or you can walk one block over and it goes down to seven. Now, I can decide not to take that advice, but it could manage my day like that if I want it to — help me make all the decisions I’m going to be making every single day.

Andy Halko 38:48
What do you think about the whole Neuralink piece, and us actually merging with technology?

Richard Boyd 38:57
Like I said, I think it’s really happening already. I mean, these are exocortical extensions already. And soon — I know Research Triangle Institute here in North Carolina has this guy named Robert Ferber who’s implanting things in himself now, you know, to quantify his health and activity and that sort of thing. And it’s not much of a leap from there to talk about communications — mesh networks between people with embedded things. In fact, I did work on a DARPA project back in 2010 called cognitive coupling. It was about brain-to-brain interfaces, and I was surprised how far we’d taken it then — I haven’t been involved with how far it’s gone since. But I knew then it was possible for a team of people — if the three of us were dropped in somewhere, I could sense your readiness to assemble, your anxiety level, all of that, and think about ways I could make you more effective by increasing one of those things. It might even be chemical: Tony needs a little burst of oxytocin to go engage with that villager, or Andy needs a little more testosterone and maybe some adrenaline because he’s got a fight ahead of him. We’re already there, or have been for some time. So how does that leak over into the commercial world? I don’t know yet. But —

Andy Halko 40:27
Yeah, I do usually have to get into fights during my daily hunting routine. But —

Richard Boyd 40:33
You know, hopefully I can also help you avoid them.

Andy Halko 40:37
Exactly. So I’m curious to get into the balance between tech and business. Obviously, we’re here as the founder show, but you’re so deep in tech and the future. How does that balance work for you — between technology, invention, and innovation on one side, and creating business and profit and revenue and all of that on the other?

Richard Boyd 41:04
Yeah. I mean, there is some business in being a futurist, but it’s mostly about speaking and writing books, and to me that’s not a business I’m interested in — I mean, I like it — but any time someone calls me a futurist, I always say, no, I’m not; I’m a nowist. I’m interested in what can be applied right now. As William Gibson says, a lot of this stuff is already here; it’s just unevenly distributed. So I just want to more evenly distribute things that I know are already working. For example, this idea of animated personas — applying that to selling Coca-Cola and insurance products and whatever, and to market research, which is a $3 billion industry. Well, we’ve been doing this modeling of human behavior and positions in the DOD for decades. So all I’m doing is taking that and mapping it over to a different industry. It’s not really in the future; like I said, it’s just unevenly distributed. When I was at Lockheed, we were modeling the entire country of Afghanistan. And we wouldn’t say there are four kinds of people — you’ve got your Taliban, you’ve got your Sunnis, you’ve got your Shias, you’ve got your whatever. We would end up with something like 5,000 different independent viewpoints. And how many of each? If I draw a ring around Helmand Province, how many really different groups of viewpoints do I have, and how are they going to feel about me building a school for girls there? Some of them are fine with it, some of them are neutral, and these guys over here are all red — they’re going to go blow up the school, so I need to keep an eye on them. So we’ve been doing that for a while, and that’s where I think the money is. And I think even RPA is already commoditized, in my view. If you’re not looking across your organization and figuring out everywhere you can do process automation — instead of having people doing insurance reimbursement codes or reviewing loan agreements or lease agreements — there’s that famous JP Morgan story, which we were sort of tangential to many years back, where they were spending, at the time, 360,000 hours a year of New York lawyer time just reviewing loan agreements. Let’s say it’s 800 bucks an hour — God, that’s almost $300 million of expense that’s built into your P&L, built into your stock price, an assumed cost everybody has. Well, they implemented something — I forget the name of it now — at, let’s say, a $2 million cost and in less than a year, and they never have that cost again. Think about that from a business perspective: a one-time investment of $2 million that removes a $300 million cost, and it pays off annually — it’s an annuity every year from that point on. So JP Morgan got religion on that very fast, and is doing that for everything they can find within the organization. Hopefully those New York lawyers are all okay — I don’t know if we need to do a GoFundMe or take care of them. All right.
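
As a back-of-the-envelope check on the numbers in that story — using the figures as cited in the conversation, with the hourly rate treated as an assumption:

```python
# Rough arithmetic behind the "annuity" framing; illustrative, not JPMorgan's accounting.
hours_per_year = 360_000        # lawyer hours spent reviewing loan agreements
rate = 800                      # assumed dollars per hour
annual_cost = hours_per_year * rate          # ~$288M per year
one_time_automation = 2_000_000              # rough build cost cited

print(f"recurring cost removed: ${annual_cost / 1e6:.0f}M per year")
print(f"payback period: {one_time_automation / annual_cost * 365:.1f} days")
```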

Andy Halko 44:10
Well, I was gonna ask about that — the next piece of that pie is when process automation and AI impact employment rates. I’ve always thought about that: how do we change and adapt? And it’s so interesting, because right now it’s such a hot job market, but I keep thinking there’s got to be a cliff somewhere.

Richard Boyd 44:31
Well, I mean, we’ve been predicting — again, smart people sitting around trying to predict stuff — we thought, hey, we’re going to see double-digit downturns and double-digit unemployment; it’s just going to be miserable really soon, and we’ve got to get ready for it. I’ve been saying that since 2014, and it’s not happening. The pandemic was a blip, but it really isn’t happening. The way to really think about it is that the Industrial Revolution created the same problem for us with technology — it’s just that we had 200 years to adapt to it. In 1800, 90% of everybody in the US was a farmer or involved in farming. By 1900, it was something like 40%. Today, it’s two. So we found something else for people to do, but we had 200 years to make that adjustment. The problem today is we’ve got maybe a decade. You know, I was on the Board of Trustees, until recently, for eight years at a community college called Wake Tech here in North Carolina, with 70,000 students. When I looked across the programs — I’ll just choose one, like radiology — I’m like, wait, you’re training people to read X-rays? No, stop it right now. We already have — again, I’m a nowist — we already have technologies that are way, way better, like two orders of magnitude better, than any human being at reading an X-ray. Why in the hell would you go into that? Don’t go into that job. It’s not going to be a job soon — it’s not a job now, right? And everyone’s looking at me like, oh, you’re from the future? And I’m like, no, I’m not. I’m telling you, it exists now. I just have to plug a machine in here and you don’t have a job, so don’t invest your time in that. So the question is: what sorts of things will we do? We are finding things. But is it going to be disruptive? Yes.

Andy Halko 46:24
Yeah, I always think so. I’m curious about the process for you: when someone comes to you and says, we have a problem — where do you start? Is it breaking apart all the questions? Is it just mind mapping it? How do you start when an organization — whether government or commercial — comes to you and says, how do I solve this problem?

Richard Boyd 46:48
Sure. All right. Well, again, the process automation bit is the easiest one. It’s like, what are you doing today? Let’s look. And we’ve often even said, maybe you need a new kind of person in your organization — a chief resource officer — who’s just looking across everything your people are doing and saying, oh, that’s something we could do with a machine today, so let’s move those people to a different role, or have fewer of them, and let’s go ahead and automate that. And you could say, well, I want to keep people employed — but if you’re up against someone like JP Morgan, who has that mindset, you’re going to be a dinosaur so fast. It’d be like Kodak in the photo industry — just out of business pretty soon. So we always look at that first. The second thing I always look at is, okay, what about your organizational knowledge? People always look at me like, what? It’s like — think about Salesforce, but for knowledge. You invest a lot in your people, and you have turnover. Here in North Carolina, for example, of the 58 community colleges, over the last three years we’ve had 20 new presidents. So what happens when those people leave, or faculty leave, or whatever? Well, you’ve lost knowledge that you gained — in the information age — probably at great expense. So when the new person comes in, they have to rebuild it all over again. Or, what if there were a little bot there that had the model and the map of the previous president, and said: well, when Dr. Scott ran his first Board of Trustees meeting here, these are the information objects and people he went to, and what he found useful. You can follow that map, or you can build your own, but at least it’s there as a comfort — sort of training wheels — if you want it, and it’s not lost to the organization. Then the third one is: if you’re selling to customers, how well do you know them? If you’re Coca-Cola, why not have a little Sims world running all the time, where you’re testing new product ideas — instead of trying to A/B test with polls and surveys, which we know are flawed. Instead, let’s have data-built models of people, and then run your A/B testing in there. And again, because it’s machines, you don’t have to run eight experiments; you can run 10 to the eighth. So those are the things — I go through that progression, and every time we do an instance, we end up with this list of 20 things. Then it’s like, let’s prioritize them now — which ones do you want to do first?
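
A toy version of running the test against data-built persona models instead of live polls: simulate which product variant each sampled persona segment would choose, and compare at scale. The segments, shares, and preferences here are all hypothetical.

```python
# Simulated A/B test over invented persona segments (illustrative only).
import random

segments = {  # segment name -> (population share, probability of preferring variant B)
    "health-focused": (0.25, 0.70),
    "price-driven":   (0.45, 0.35),
    "brand-loyal":    (0.30, 0.50),
}

def simulate(n_trials: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    b_wins = 0
    for _ in range(n_trials):
        name = rng.choices(list(segments), weights=[s[0] for s in segments.values()])[0]
        _, pref_b = segments[name]
        b_wins += rng.random() < pref_b
    return b_wins / n_trials

print(f"simulated preference for variant B: {simulate():.1%}")
```

Because the “respondents” are simulated, running 10^8 trials instead of 10^5 is just more compute, not more survey spend — which is the point Richard is making about eight experiments versus 10 to the eighth.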

Tony Zayas 49:20
Richard, I would love to go back for just a second to discuss the idea of mindset. Because despite the speed at which everything is advancing, it seems to me that in order to harness the power of machine learning, it really does take a shift in perspective — to know what questions to ask of this type of technology. So, in order to gain more buy-in from more businesses, how do we address that? Because, again, I think it forces people to look at things very differently, and I imagine that’s a challenge for gaining buy-in and adoption and things along those lines.

Richard Boyd 50:11
No, that’s very true. I learned this lesson a long time ago — I think it was actually James Cameron who told me: Richard, you like inventing technologies out in the future, putting a stake in the ground, and dragging yourself to it. That’s not a good way to live. And I’m like, you know, you’re right. Instead, I think it’s more of that nowist mentality: what works today, and how can you apply it today. But change management is always a big part of these implementations. If you forget that piece, you’re going to have a difficult time. Even with the 58 community colleges here, there’s a lot of distrust: hey, is this thing reading my emails? Is it doing this? Am I losing privacy? It’s like, well, we can put governance around it to convince you that’s not the case. Can it read your emails? Yes — and we’re going to prevent it, by policy, from doing that. But you have to invest in change management, because it is, like I said, incredibly powerful and incredibly intimate, and therefore it needs governance around it. So, yeah, change management has to be invested in; it’s its own specialized skill. How do you take people through that? Go back to my kale example: you don’t just abruptly go, I was a meat eater and then I became a vegetarian, or I was a Republican and I became a Democrat, because of that one well-crafted email. That’s not how it happens. It happens with this slow micro-adoption of ideas that build on each other until you finally reach this new mindset. And we can do it — like I said, it’s surprisingly easy to do, so great care must be taken. In our case, in the work we’re doing around COVID, we’re trying to move people — wherever they start from with their perspective — to understand: here’s why you should wear a mask or get vaccinated, and here’s the benefit to you and everybody you count on. And everybody’s message path is different; you can’t just do one campaign, it’s got to be all these micro-campaigns around groups to move them. Some of them you’ll never get over to the other side, where they’re eating kale every day, but you can move them to: dark green vegetables are a little better for me — eat food, not too much, mostly plants — instead of, I’m going vegan today, James Cameron.

Andy Halko 52:48
So what does the future hold for your organization? What do you see happening over the next couple of years?

Richard Boyd 52:56
Yeah. I mean, I expect that, like we’ve seen in previous technology movements, there will be some kind of consolidation in these spaces, and small companies like mine will likely become parts of other organizations. But we’re still finding our way. The biggest problem with most small companies is that — God, there are so many opportunities out there to apply the tech, and you really have to focus in. So you’ve heard me talk about knowledge amplification — organizational knowledge — and then animated personas. And we’re not doing that much process automation, because I think it’s already commoditized. So I think you’re going to see consolidation. But if you watch what’s happening with GPT-3, where machines are getting really, really good at fooling people into believing they’re a real person — getting closer and closer to that Turing test solution, at least when it’s contextual — that’s going to have some broad-reaching implications about trust. On the positive side, it means having really, really good virtual assistants around you, helping you all the time. They might become annoying, but you can also just silence them and their feelings don’t get hurt. You could have little bots — and I really like what Doc Searls talks about in his book The Intention Economy — the idea that, instead of me being bombarded with ads, where it’s all about monetizing human attention, which is what Facebook and LinkedIn and everybody are all about, we turn it the other way: I’m putting out little mini-RFPs. I’ve got little bots out there always looking for this kind of vacation, or this Hugo Boss 40-regular jacket at a certain price, always out there searching for me — for certain kinds of things, information, connections, whatever — these things doing my bidding, instead of me having to be good at search and sitting here typing. I’m getting surprised by these artificially intelligent systems that are tuned to my values and my goals and are trying to help me get there. I think that’s a really positive outcome, and I want to help that become a reality.

Tony Zayas 55:26
It’s a very cool concept — kind of flipping the script on everything that’s currently happening, reversing it. That’s really interesting to think about. Before we ask what I think will be our last question here, Richard: where can our viewers learn more about you, or about Tanjo? We’re going to share — we already sent out — your Medium article, your manifesto.

Richard Boyd 55:52
Yeah, I was writing on Medium for a while, but now LinkedIn has become a more prominent place to put ideas and to have congress with other people who have similar ideas, so I’m doing more and more of that. By the way, Reid Hoffman was an investor in my last company, the one I sold to Lockheed, so I’m one of the first 1,000 users of LinkedIn. I’m not sure I use it as adeptly as I should. But when I think about things and decide they’re worth sharing, I usually put them there. So just find me on LinkedIn, or on Twitter at metaversial — m-e-t-a-v-e-r-s-i-a-l — where I share ideas as well. And I have a little website where I give advice for those of you who are parents and also technologists. I created a site for my son, who’s now nine, the day he was born, and I’ve been out there collecting advice from people like James Cameron and Nolan Bushnell and Douglas Adams, and folks like that, and putting it into that site, which is deadylan.com.

Andy Halko 56:59
Very cool. So, the question I ask everybody on the show — and you’ve talked about a lot, so you may have somewhat answered this — but I want to hear from our tech founders: what do you think, over the next five to ten years, is going to be the one technology that’s the most society-changing? Is it machine learning and AI? Is it augmented reality? Is it language processing? What do you think is the piece that’s going to make the biggest impact?

Richard Boyd 57:31
Yeah. The reason I left my overpaid job at Lockheed to do this — another startup, with all the sweat and the fear and everything — is because I strongly believe that machine learning and AI, especially in areas like education, are going to be incredibly transformative. There’ll be a time when — let’s say we were all together in person — I would say, well, there are three of us here. But I think in the next five to ten years there will be at least nine entities in this conversation, and only three of us will be human. Each of us will have a sort of augmented intelligence from systems that are listening and grabbing stuff. So while I’m talking to you, and I drop a reference to Douglas Adams, or Alan Kay, or Herbert Simon or whatever, and you go, man, I don’t know who that is — your system is bringing data up, and it may be on your screen, or it may be in your augmented reality, a sort of floating field out here, where it’s augmenting the conversation. I’ve seen good demonstrations of that already. And that’s not necessarily the Magic Leap approach — I’ve seen early demonstrations of their augmented reality, and it’s mostly focused on entertainment, and they’re really trying to solve the unstructured-environment z-buffering problem, which they set out on a long time ago; but I see $2 billion, and a decade later it’s still not working. I don’t fault them for collecting that much money. But I really think this idea of having this web of deeper information around my conversations and my world, even as I’m walking down the street, is going to be incredibly empowering and transformative. It’s the Young Lady’s Illustrated Primer from Neal Stephenson’s The Diamond Age.

Tony Zayas 59:27
This has been a fascinating conversation, Richard. We’re here at the hour mark, so we’re going to wrap, though we probably could go a lot longer with you, because this has been fantastic. We really appreciate you taking the time to talk with us. So thank you. And to our viewers, thank you for joining — we will see you guys again next week.

Richard Boyd 59:47
I enjoyed admiring problems with you. And hopefully we’ll all make some better solutions together.

Tony Zayas 59:53
Yeah, that’s fantastic. Thank you. Thank you. Take care everybody. Cheers.