Google starts displaying ads in AI search summaries

Google has begun displaying ads in its AI-generated search summaries, a move that responds to investor concerns about the profitability of its AI projects. The feature is being fully rolled out to U.S. users, with related product advertisements appearing above, below, and within search summaries. Although the AI summaries cite content from other websites, Google said it will not share advertising revenue with those sites.

In addition to advertising, Google will add inline links to AI summaries; preliminary tests showed this design drives more traffic to the linked websites. Google will also begin organizing search results into scrollable suggestion lists based on users' queries and account history, and Google Lens will gain support for video and voice input. The moves are aimed at improving the user experience while responding to pressure from regulators and demonstrating that the company holds no unfair advantage in the search and advertising technology markets.

Original text:https://www.bloomberg.com/news/articles/2024-10-03/google-begins-wide-rollout-of-ads-in-ai-overview-search-results

Google Lens now lets users search via video

On October 4th, Google announced that when a photo alone cannot capture what users want to search for, Google Lens will now let them record a video and even ask questions by voice about what they see. The feature displays AI-generated summaries and search results based on the video content and the user's question, and it launches today in Search Labs on Android and iOS. Google first demonstrated video-based search at its I/O conference in May. According to Google, someone curious about fish at an aquarium can hold their phone in front of the exhibit, open the Google Lens app, and hold down the shutter button. Once Lens starts recording, they can ask: "Why are they swimming together?" Google Lens then uses the Gemini AI model to provide a response.

Original text:https://www.theverge.com/2024/10/3/24259759/google-lens-search-video-ai-launch

OpenAI just released the ChatGPT Canvas feature:

  • Designed for coding and writing scenarios
  • Canvas opens in a separate window
  • Shortcuts for writing scenarios: editing suggestions, adjusting length, changing the reading level, etc.
  • Shortcuts for coding scenarios: code review, adding logs, fixing bugs, converting between programming languages, etc.

Starting today, it is being gradually rolled out to Plus and Team users; you can select the "GPT-4o with canvas" model to try it.

Canvas is built on GPT-4o, is still in beta, and can be selected manually from the model picker.

All ChatGPT Plus users can use it directly without waiting, and OpenAI also plans to make it available to all free users in the future.

Canvas lets you not only do research with ChatGPT but also write code, emails, and more; most importantly, it can brainstorm with you. Interestingly, Canvas can also add emoji.


ChatGPT launches a new Canvas feature offering collaboration aids such as editing suggestions and code reviews

The new Canvas feature appears to be a counterpart to Claude's Artifacts feature, automatically triggering coding and writing collaboration in certain situations.

For writing, it includes functions such as suggesting edits, adjusting length, changing the reading level (from kindergarten to graduate school), polishing text, and adding emoji to help users improve document quality. For code, it includes code review, adding logs and comments, fixing bugs, and porting between languages to keep code readable and efficient.

This new feature will be available to Plus and Team users starting today.

Original text:https://openai.com/index/introducing-canvas/
