Deep Learning Models Predict Cardiovascular Risk Factors from Images of the Eye
A team of scientists from Google Research has created deep learning models that successfully predicted age, smoking status, blood pressure, and other cardiovascular risk factors from retinal images.
The ability to stratify patients by cardiovascular risk is essential for identifying those likely to suffer a heart attack, stroke, or other cardiovascular disease in the future. High-risk patients can then take steps to improve their cardiovascular health. Doctors typically take into account a variety of risk factors: demographics such as age, sex and ethnicity; daily behaviors like exercise, smoking status and diet; as well as results from blood pressure and cholesterol tests.
As a simple alternative to the traditional patient questionnaire and blood tests, a team of researchers from Google Research and the Stanford School of Medicine have developed deep learning models to predict cardiovascular risk factors from photographs of the back of the eye. Since these retinal fundus images are already collected for diabetic eye disease screening, this initial study suggests that deep learning could uncover additional information that could be further leveraged for preventative health. They published their results in Nature Biomedical Engineering on Feb. 19.
“If future research pans out, we do hope that the simpler technique of retinal fundus photography could also give additional information about cardiovascular risk non-invasively,” said Lily Peng, product manager of the Google Brain Team.
The authors trained their models using existing retinal fundus images from a combined 284,335 patients in the UK Biobank and EyePACS, two large medical databases that also included demographic and cardiovascular information on each individual. Next, the prediction accuracy of the models was put to the test on two new groups of patients: 12,026 from the UK Biobank and 999 from EyePACS.
The deep learning algorithms successfully predicted various cardiovascular risk factors such as age, gender, ethnicity, smoking status, and blood pressure from the retinal images alone. For instance, in the UK Biobank group, they could distinguish a smoker from a non-smoker 71 percent of the time and predict patients' systolic blood pressure within 11 mmHg on average.
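The "within 11 mmHg on average" figure describes a mean absolute error, a standard way to score numeric predictions against measured values. A minimal sketch of how such a figure is computed, using made-up numbers rather than the study's data:

```python
# Toy illustration (hypothetical values, not the study's data or code):
# predicting systolic blood pressure "within 11 mmHg on average" means the
# mean absolute error (MAE) between predictions and measurements is ~11 mmHg.

def mean_absolute_error(predicted, actual):
    """Average of |prediction - measurement| over all patients."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical model predictions vs. measured systolic pressures (mmHg).
predicted = [128, 135, 142, 118]
measured = [120, 140, 155, 121]
print(mean_absolute_error(predicted, measured))  # → 7.25
```

The lower the MAE, the closer the model's blood pressure estimates track the cuff measurements.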
The team also tested the models' ability to predict the onset of a major adverse cardiovascular event like heart attack or stroke. Although they had more limited data for these events, the algorithms could still correctly pick out a patient who had a cardiovascular event versus one who hadn't about 70 percent of the time.
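"Correctly pick out a patient who had a cardiovascular event versus one who hadn't about 70 percent of the time" is the standard interpretation of the area under the ROC curve (AUC): the probability that a randomly chosen event patient gets a higher risk score than a randomly chosen event-free patient. A small sketch with hypothetical scores:

```python
# Toy illustration (hypothetical scores, not the study's code): the AUC equals
# the fraction of (event, no-event) patient pairs in which the model assigns
# the higher risk score to the patient who actually had the event.

def pairwise_auc(event_scores, no_event_scores):
    """Fraction of pairs ranked correctly; ties count as half a win."""
    wins = 0.0
    for e in event_scores:
        for n in no_event_scores:
            if e > n:
                wins += 1.0
            elif e == n:
                wins += 0.5
    return wins / (len(event_scores) * len(no_event_scores))

# Hypothetical risk scores from a model.
had_event = [0.9, 0.7, 0.6]
no_event = [0.8, 0.4, 0.3, 0.2]
print(pairwise_auc(had_event, no_event))  # → 0.8333...
```

An AUC of about 0.70, as reported for the cardiovascular event prediction, means the model ranks the event patient above the event-free patient in roughly 7 of every 10 such pairs.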
“Our dataset had many images labeled with smoking status, systolic blood pressure, age, gender and other variables, but it only had a few hundred examples of cardiovascular events,” said Peng. “We look forward to developing and testing our algorithm on larger and more comprehensive datasets.”
Rohit Varma, a professor of ophthalmology at the University of Southern California who was not involved in the research, noted that the research remains in early stages, but he does see promise and novelty in the technique. He also wonders whether inserting some of the common cardiovascular risk factors into the models would further improve accuracy for the prediction of future cardiovascular events.
“The most important study to demonstrate the value of applying a deep learning system to retinal images would be to show if specific retinal features predict either a major cardiovascular event or cardiovascular mortality,” said Varma. “If non-invasive retinal imaging is able to predict such events, then such an approach would be a major step forward in early screening, detection and prevention of cardiovascular morbidity.”