At its Search On event today, Google introduced a slew of new features that, taken together, are its strongest attempt yet to get people to do more than type a few words into a search box. By leveraging its new Multitask Unified Model (MUM) machine learning technology in small ways, the company hopes to kick off a virtuous cycle: it will provide more detailed and context-rich answers, and in return it expects users to ask more detailed and context-rich questions. The end result, the company hopes, will be a richer and deeper search experience.
Google SVP Prabhakar Raghavan oversees search alongside Assistant, ads, and other products. He likes to say, and repeated in an interview last Sunday, that "search is not a solved problem." That may be true, but the problems he and his team are trying to solve now have less to do with wrangling the web and more to do with adding context to what they find out there.
AI will help Google understand the questions people are asking
For its part, Google is about to start flexing its ability to use machine learning to recognize constellations of related topics and present them to you in a systematic way. A redesign coming to Google Search will start showing a “things to know” box that will direct you to various subtopics. When part of a video is relevant to the general topic—even if it’s not the entire video—it’ll send you there. Shopping results will start showing available inventory in nearby stores and even the different styles of clothing associated with your search.
For your part, Google is offering (though perhaps "asking" is a better word) new ways to search that go beyond the text box. It's making an aggressive push to bring its image recognition software, Google Lens, to more places: it will be built into the Google app on iOS and the Chrome web browser on desktop. And with MUM, Google expects users to do more than just identify flowers or landmarks with Lens, instead using it directly to ask questions and shop.
"I think this is a cycle that will continue to grow," Raghavan says.
Google Lens will let users search using images and refine their query with text. Image: Google
Those two sides of the search equation are meant to kick off the next stage of Google Search, one where its machine learning algorithms become more prominent in the process by directly organizing and presenting information. Google's efforts here will be greatly aided by recent advances in AI language processing. Thanks to systems known as large language models (MUM is one of these), machine learning has gotten much better at mapping the connections between words and topics. The company is leveraging those skills to make search not just more accurate but more exploratory and, it hopes, more helpful.
One example Google gives is instructive. You might not know at first what the parts of your bicycle are called, but if something is broken, you'll need to find out. Google Lens can visually identify the derailleur (the gear-changing part hanging near the rear wheel), and rather than just handing you a discrete piece of information, it will let you ask questions about how to fix that thing directly, leading you to the information (in this case, the excellent Berm Peak YouTube channel).
Multimodal search requires entirely new input from users
The push to get more users to open up Google Lens more often is compelling on its own merits, but the bigger picture is Google's effort to gather more context about your queries (so to speak). More complex, multimodal searches combining text and images, says Raghavan, "demand an entirely different level of relevance than what we, the provider, have, and so it helps us to have as much context as possible."
Google has long been moving beyond the so-called "ten blue links" of search results; it already shows information boxes, image results, and direct answers. Today's announcements are another step, one where the information Google provides is not just a ranking of relevant results but a distillation of what its machines understand by scraping the web.
In some cases, as with shopping, that distillation means Google will send more page views to Google itself. As with Lens, it's an important trend to watch: Google searches increasingly get you to Google's own products faster. But there's a bigger danger here, too. The more things Google tells you directly, the greater its burden to speak with less bias.
By that I mean bias in two different senses. The first is technical: the machine learning models Google wants to use to improve search have well-documented problems with racial and gender bias. They're trained by reading big chunks of the web and, as a result, pick up its uglier ways of talking. Google's troubles with its AI ethics team are also well documented at this point; it fired two leading researchers after they published a paper on this very subject. As Google's search VP Pandu Nayak told The Verge's James Vincent in his article on today's MUM announcements, Google knows there are biases in all language models, but the company believes it can avoid "putting it out for direct consumption."
Be that as it may (and to be clear, it may not be), this leads to a more consequential question and another kind of bias. Once Google starts telling you its own synthesis of information directly, from what point of view is it speaking? As journalists, we often talk about how the so-called "view from nowhere" is an inadequate way to present our reporting. What is Google's point of view? This is an issue the company has run into before, sometimes referred to as the "one true answer" problem. When Google tries to give people short, definitive answers using automated systems, it often ends up spreading false information.
Raghavan answered that question by pointing to the complexity of modern language models. "Almost all language models, if you look at them, are embeddings in a high-dimensional space. There are parts of these spaces that are more authoritative, parts that are less authoritative. We can assess those things mechanically fairly easily," he explains. The challenge, Raghavan says, is how to present some of that complexity without overwhelming the user.
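Raghavan's framing can be made loosely concrete with a toy sketch. Everything below is invented for illustration (the three-dimensional vectors, the sources, and the "authoritative" centroid are not Google's model or data); it just shows the standard way points in an embedding space are compared, with cosine similarity standing in for "closeness to an authoritative region":

```python
import math

# Toy 3-dimensional "embeddings" (real models use hundreds or thousands
# of dimensions). All values here are made up for illustration.
EMBEDDINGS = {
    "peer-reviewed study": (0.9, 0.8, 0.1),
    "government health page": (0.85, 0.75, 0.2),
    "anonymous forum post": (0.1, 0.2, 0.9),
}

# A hypothetical center of the "more authoritative" part of the space.
AUTHORITATIVE_CENTROID = (0.88, 0.78, 0.15)

def cosine(u, v):
    """Cosine similarity: 1.0 means pointing the same way, 0.0 unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Score each source by its proximity to the authoritative region.
for name, vec in EMBEDDINGS.items():
    print(name, round(cosine(vec, AUTHORITATIVE_CENTROID), 3))
```

In a sketch like this, "assessing authoritativeness mechanically" reduces to a geometric measurement; the hard part Raghavan alludes to is everything around it, including how to surface that nuance to a user.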
If Google is responding directly to users, can Google remain neutral?
But I do get the sense that the real answer is that, at least for now, Google is doing what it can to avoid confronting the question of its search engine's point of view by steering clear of domains where it could be accused of, as Raghavan puts it, "excessive editorializing." Often, when speaking with Google executives about these problems of bias and trust, they focus on the easier-to-define parts of those high-dimensional spaces, like "authoritativeness."
For example, Google's new "things to know" boxes won't appear when someone searches for things Google has identified as "particularly harmful/sensitive," though a spokesperson says Google isn't manually allowing or denying "specific curated categories, but our systems are able to comprehensively understand the topics that should or should not trigger these types of features."
Google Search, its inputs, outputs, algorithms, and language models have all become almost unimaginably complex. When Google tells us it can now decipher the contents of videos, we take for granted that it has the computing chops to pull it off, but the reality is that even indexing such a massive corpus is a monumental task, one that dwarfs the original mission of indexing the early web. (For the record, Google is only indexing audio transcripts of a subset of YouTube, though with MUM it aims to do visual indexing and to index other video platforms in the future.)
The traveling salesman problem comes up often when you're talking to computer scientists. It's a famous puzzle where you try to calculate the shortest possible route between a given number of cities, but it's also a rich metaphor for thinking through how computers do what they do.
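A minimal sketch makes the puzzle concrete. The four-city distance matrix below is invented for illustration; the point is that a brute-force solver must check every ordering of cities, and the number of orderings grows factorially with the number of cities, which is why the problem is a byword for computational hardness:

```python
from itertools import permutations

# Hypothetical symmetric distances between four cities, in km.
DIST = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def shortest_tour(dist):
    """Brute-force the traveling salesman problem: try every ordering
    of cities and keep the cheapest round trip starting at city 0."""
    n = len(dist)
    best_route, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        route = (0, *perm, 0)  # leave city 0, visit all others, return
        cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

route, cost = shortest_tour(DIST)
print(route, cost)  # prints (0, 1, 3, 2, 0) 80
```

Four cities mean only six routes to check; fifty cities mean roughly 10^62, which no amount of hardware can enumerate — the scale of the web puts Google's indexing problems in similarly unforgiving territory.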
"If you gave me all the machines in the world, I could solve fairly big instances," says Raghavan. But search isn't a problem that can be solved just by throwing more computers at it. Instead, Google has to come up with new approaches, like MUM, that make better use of the resources Google can realistically build. "Even if you gave me all the machines out there, I would still be bound by human curiosity and experience."
Google's new ways of making sense of information are impressive, but the challenge will be what it does with that information and how it presents it. The funny thing about the traveling salesman problem is that nobody ever stops to ask what exactly is in his case — what is he going door to door to show all of his customers, anyway?