Lights, camera, sound – AI improvements for Google Meet

Updates and new features in Google Meet that will bring welcome improvements to virtual meetings were announced this week at Google I/O.


As a “hybrid” event itself, Google I/O provided a very suitable platform to introduce upcoming features to Google Workspace, which will improve the quality of virtual meetings and facilitate productive remote collaboration. And as icing on the cake, Google was able to claim that some of these improvements are possible thanks to AI.

Three new features are intended to overcome the problems of using suboptimal video equipment in a suboptimal environment. Portrait Restoration improves video quality by using Google’s AI technology to automatically refine and retouch your video stream, and will help in situations where Google Meet is used in rooms with less-than-ideal lighting. According to Google, it also automatically improves your video if you have a poor Wi-Fi connection.

Going further in tackling the lighting problem, Portrait Lighting uses machine learning to simulate studio-quality lighting in the user’s video stream, allowing them to adjust both the position and the brightness of the light.

To help with sound quality, De-reverberation filters out echoes on Google Meet calls and should be especially useful when video conferencing from an empty room or a space with hard surfaces.

A feature coming later this year that seems very useful is automated transcription of Google Meet meetings in Google Workspace, so users can quickly catch up on meetings they were unable to attend, presumably thanks to Google’s speech-to-text AI. Another use of natural language understanding is automatic summarization: the feature already introduced in Google Docs is now being extended to Spaces to provide a summary of conversations you may have missed.

Google also announced a Live Sharing SDK, which allows developers to sync content across devices in real time and integrate Meet into their apps. According to the announcement, the SDK supports two key use cases (a code sketch of the co-viewing flow appears after the list):

  • Co-viewing—Synchronizes streaming app content across devices in real time and allows users to take turns sharing videos and playing the latest hits from their favorite artist. Users can share commands such as starting and pausing a video or selecting new content within the app.
  • Co-do—Synchronizes arbitrary app content, allowing users to come together to perform an activity like playing video games or following the same workout regimen.

The Google Meet Live Sharing SDK is now in preview and you can request access through the Early Access Program.
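To make the co-viewing flow more concrete, here is a minimal Kotlin sketch of the kind of playback state an app would keep in sync across participants’ devices. Because the SDK is still in preview, the names used below (CoWatchingState, CoWatchingSession, reportLocalState) are hypothetical stand-ins for illustration, not the actual Live Sharing SDK API.

// Illustrative sketch only: these types are hypothetical stand-ins for the
// Live Sharing SDK, which is in preview; real class and method names will differ.

/** Playback state an app would keep in sync across participants' devices. */
data class CoWatchingState(
    val mediaId: String,        // which video everyone is watching
    val positionMillis: Long,   // current playback position
    val isPlaying: Boolean      // playing vs paused
)

/** Hypothetical callback invoked when another participant changes the shared state. */
fun interface CoWatchingListener {
    fun onStateChanged(state: CoWatchingState)
}

/** Hypothetical session object: the app reports local changes and applies remote ones. */
class CoWatchingSession(private val listener: CoWatchingListener) {
    /** Called by the app when the local user plays, pauses, seeks or picks a new video. */
    fun reportLocalState(state: CoWatchingState) {
        // In a real integration this would be relayed to everyone on the Meet call;
        // here we simply echo it back to the listener to show the round trip.
        listener.onStateChanged(state)
    }
}

fun main() {
    // The listener applies incoming state to the local player (stubbed with println here).
    val session = CoWatchingSession { state ->
        println("Apply state: ${state.mediaId} @ ${state.positionMillis} ms, playing=${state.isPlaying}")
    }

    // Local user starts a video; the SDK would sync this command to every device.
    session.reportLocalState(CoWatchingState("video-42", positionMillis = 0L, isPlaying = true))

    // Local user pauses at the 90-second mark.
    session.reportLocalState(CoWatchingState("video-42", positionMillis = 90_000L, isPlaying = false))
}

In practice the synchronization and the participant list would be handled by the SDK over the Meet call; the app’s job is simply to report local playback changes and apply remote ones to its own player.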

More information

7 Ways AI Improves Google Workspace

Introducing the Google Meet Live Sharing SDK

Related Articles

Google AI recreates lost Klimt artwork

Google launches the Forms API

To be informed of new articles on I Programmer, subscribe to our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook or LinkedIn.
