Google's Med-PaLM AI Model Improves Medical Diagnostics

Google AI makes a move in the medical field


Facing the impressive momentum of ChatGPT, Google has not been idle. Although its general-purpose AI products have not yet been fully opened to the public, Google's AI teams have quietly accomplished a great deal in several foundational fields.




This week, Google held its annual event "The Check Up" at its new office at Pier 57 in New York, showcasing its latest work in healthcare: Med-PaLM, a large medical language model designed specifically to answer healthcare-related questions, now performs at a level close to that of "expert" doctors on medical licensing exam-style questions, and Google is building AI models that simplify the acquisition and interpretation of ultrasound images to help ease the shortage of trained ultrasound operators in Africa, among other projects.

Upgraded medical language model improves diagnostic accuracy


This is Google's third year hosting the event, with this year's theme being "Helping people live healthier lives."

In recent years, Google has been exploring how to use AI to improve healthcare, including early disease detection and expanding healthcare services.

As AI models capable of processing and analyzing large amounts of natural-language text, large language models (LLMs) have the potential to bring broad and profound changes to the healthcare industry. For example, they can reduce the need for in-person visits, make remote consultations more efficient and accurate, help simplify medical documentation and billing, and reduce the workload of healthcare professionals.
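To make the documentation use case concrete, here is a minimal sketch (in Python) of how a clinical note might be condensed with a generic text-generation model. The `call_llm` function and the example note are hypothetical placeholders, not a real Med-PaLM or Google API, which this article does not describe.

```python
# Minimal sketch: prompting a generic LLM to condense a clinical note.
# `call_llm` is a hypothetical stand-in for whatever text-generation
# client is available; it is NOT a real Med-PaLM or Google endpoint.

def build_summary_prompt(note: str) -> str:
    """Build a prompt asking the model to condense a clinical note."""
    return (
        "Summarize the following clinical note in two sentences for a "
        "discharge summary. Keep all medications and dosages.\n\n"
        f"Note:\n{note}\n\nSummary:"
    )

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real text-generation client."""
    return "(model output would appear here)"

if __name__ == "__main__":
    note = (
        "58-year-old patient with chest pain radiating to the left arm. "
        "ECG showed no ST elevation. Started on aspirin 81 mg daily; "
        "outpatient stress test scheduled."
    )
    print(call_llm(build_summary_prompt(note)))
```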

Because of the specialization and high stakes of the medical field, its quality requirements for language models are far higher than what any general-purpose language model on the market can meet.

Last year, shortly after the release of ChatGPT, Google and DeepMind announced Med-PaLM, a large medical language model designed specifically to answer healthcare-related questions within a narrower domain. Med-PaLM can accurately answer both multiple-choice and open-ended questions, provide reasoning for its answers, and evaluate its own responses. It was also the first AI model to achieve a "passing score" (>60%) on US medical licensing exam-style questions.
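For context, a "passing score" on multiple-choice, licensing-exam-style benchmarks is usually just the fraction of questions answered correctly. The sketch below shows that computation on made-up placeholder data, not real exam content or real Med-PaLM outputs.

```python
# Minimal sketch: scoring multiple-choice answers as plain accuracy.
# The answers below are made-up placeholders, not real exam content.

def accuracy(predicted: list[str], correct: list[str]) -> float:
    """Fraction of questions where the predicted choice matches the key."""
    assert len(predicted) == len(correct)
    return sum(p == c for p, c in zip(predicted, correct)) / len(correct)

model_answers = ["B", "D", "A", "C", "B", "A", "D", "C", "B", "A"]
answer_key    = ["B", "D", "A", "C", "A", "A", "D", "B", "B", "A"]

score = accuracy(model_answers, answer_key)
print(f"Accuracy: {score:.0%}")           # 80% on this toy set
print("Passing (>60%):", score > 0.60)    # True
```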

At this year's event, Google unveiled the latest version, Med-PaLM 2. According to Google, Med-PaLM 2 performs close to the level of "expert" doctors on medical licensing exam-style questions, scoring 85%. That is an 18% improvement over the previous version, far exceeding similar AI models.

"While this is an exciting progress, there is still much work to be done... We have found significant gaps in answering medical questions and meeting our product excellence standards. We look forward to working with researchers and the global medical community to narrow these gaps and understand how this technology can help improve healthcare services," Google said.

Google is also advancing AI-assisted ultrasound imaging to help narrow the healthcare gap between underdeveloped and developed regions. In many resource-poor areas, ultrasound equipment has become more accessible, but performing examinations and interpreting the images still requires experts with years of experience, who are often unavailable in those areas.

For example, in sub-Saharan Africa, maternal mortality rates are high and trained workers who can operate ultrasound machines are scarce. To change this, Google has partnered with the Kenyan non-profit Jacaranda Health to build an AI model that simplifies the acquisition and interpretation of ultrasound images and helps surface important information, including the gestational age of expectant mothers and early signs of breast cancer.

Google's AI healthcare projects also include a better tuberculosis screening method and ultrasound-based breast cancer detection, but there is no timeline yet for when these will be available to users.

Open-source project helps developers build better healthcare software


Digital mobile health applications help lower the barriers to equitable healthcare. In resource-scarce areas, they are becoming an important tool for health workers to deliver better care to patients. For developers, however, building tools that share health information across systems and run effectively in areas with limited internet connectivity is costly and difficult.

At the event, Google launched Open Health Stack, an open-source project that gives developers the building blocks they need to efficiently create digital health solutions that follow World Health Organization recommendations.

The project is built around the Fast Healthcare Interoperability Resources (FHIR) standard, making it easier for developers to build apps that exchange health data and for healthcare workers to access that information.
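To illustrate what building on FHIR looks like in practice, the sketch below creates a minimal FHIR Patient resource and sends it to a FHIR server's standard REST endpoint (POST [base]/Patient). The server URL and patient details are placeholders, and Open Health Stack's own SDK components are not shown here.

```python
# Minimal sketch: posting a FHIR Patient resource to a FHIR server's
# standard REST API. The base URL and patient data are placeholders.
import json
import urllib.request

FHIR_BASE = "https://example.org/fhir"  # placeholder FHIR server

patient = {
    "resourceType": "Patient",
    "name": [{"family": "Wanjiru", "given": ["Amina"]}],
    "gender": "female",
    "birthDate": "1996-04-12",
}

req = urllib.request.Request(
    url=f"{FHIR_BASE}/Patient",
    data=json.dumps(patient).encode("utf-8"),
    headers={"Content-Type": "application/fhir+json"},
    method="POST",
)

# A conforming FHIR server responds with the created resource and its new id.
with urllib.request.urlopen(req) as resp:
    print(resp.status, json.loads(resp.read()))
```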

According to reports, FHIR has been adopted by many major electronic health record (EHR) providers. "We hope that this set of components can drive the development of solutions and support healthcare workers in providing data-driven, patient-centered care around the world," Google said.

Some healthcare organizations in sub-Saharan Africa, India, and Southeast Asia have already begun using Open Health Stack solutions. For example, Intellisoft Consulting, a health information technology developer in Kenya, is building a maternal health application called Mamas Hub to serve community health volunteers and rural communities.

In addition to these new developer tools, Google also announced a new search feature for US users to help them find health centers that offer free or low-cost healthcare.