Alex Danco says the Valley has a habit of picking a thing and making it abundant. We've done it with information (Google), entertainment (YouTube, Netflix, Reddit), products (Amazon), and so on. Some say we're now on the path to making intelligence abundant. I don't exactly disagree; I don't deny that it's happening. I just think it won't take the route we expect.
Our path to an abundance of intelligence will not run through agents that look or act like us. There will be no robots walking around with arms, legs, and everything in between. Instead, intelligence will arrive as small units of reasoning carried out in a fraction of a second, a process known as inference: roughly tens of billions* of tiny inferences that shape our world and what we experience. They determine the content we see, the people we're exposed to, and sometimes even the lens through which we see the world.
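To make "one inference" concrete, here is a minimal sketch, assuming a toy logistic-regression model with made-up weights (the names `infer`, `weights`, and `features` are mine, purely for illustration). Production models are vastly larger, but the shape of the operation is the same: features in, a score out, in a fraction of a millisecond.

```python
import time
import numpy as np

# Stand-ins for parameters a real model would have learned elsewhere.
rng = np.random.default_rng(0)
weights = rng.normal(size=128)  # one weight per input feature
bias = 0.1

def infer(features: np.ndarray) -> float:
    """One tiny inference: a dot product and a sigmoid."""
    logit = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))

features = rng.normal(size=128)  # e.g., a user/content feature vector
start = time.perf_counter()
score = infer(features)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"score={score:.3f}, computed in {elapsed_ms:.3f} ms")
```

Multiply that by every feed ranking, recommendation, and filter you encounter in a day, and the "tens of billions" figure starts to feel conservative.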
At Google, I work with incredibly smart people who have designed systems that carry out many millions of inferences per second, and that's just at Google; collectively, across all services on the internet, I'm sure that estimate is at least three orders of magnitude short. One reason I really enjoy my work and the people around me is that they mostly appreciate how profound an abundance of intelligence is, and how much responsibility it places on all of us to make it a net-positive technology. Many of the people I've met genuinely care about producing safe, fair, and generally useful technology. Internally, just within Research and Machine Intelligence, there are many teams whose sole purpose in the organization is the safe, fair, and/or ethical use of modeling, machine learning, and "AI." These teams are filled with some of the best and brightest engineers and scientists around, and they are empowered (that is, motivated and enabled) to make sure every team at Google has access to the education, tooling, and incentives to build fair, safe, and explainable machine learning.
But aside from the big giants, I don't think many other companies can afford to fund entire teams to ensure these properties in the products and services they produce. In parallel, I and many others are doing our best to "democratize" machine learning tools, that is, to make them accessible to everyone who wants to use them. The internet made information abundant and accessible to everyone; machine learning will do the same for inference. As long as the data exists for an inference to be made, it will be made. Not directly, of course; just as the internet gave birth to YouTube and Facebook, machine learning will give rise to products, platforms, and ecosystems that change our lives as we know them. My question is: will every company, startup, and side project have the ability and money to fund an AI safety team?
We as a community have to figure out how to support AI safety and fairness efforts, and push them to grow as fast as (if not faster than) the availability of machine learning to the general public.