Task Force Meeting 3

Key Takeaways

Wednesday, March 27th, 2024

Feedback from Launch on February 29th

  • Kadija Ferryman spoke about ethics and bias, Nilanjan Chatterjee spoke about research applications, and David Dowdy spoke about educational applications. There were many questions about education and AI. Beyond positive feedback, presenters did not receive follow-up.

  • Conversations during and after the launch indicated curiosity about PHAISE. People are confused about the purpose of the Task Force, so clarifying it will be part of our work.

  • People were engaged and asking questions, and overall, it was a successful launch.

PHAISE + EPI IDEAS Seminar on March 1st

  • PHAISE partnered with EPI IDEAS to organize the panel. The discussion focused on community advisory boards in relation to AI and big data studies.

  • The panel discussed a framework for ethical review of Data Science/AI studies that does not fall to the IRB: if research findings have broader societal implications, there should be a broader ethical review. Stanford has something akin to this, a societal ethical review board.

  • The Task Force might consider an opportunity for an IRB protocol related to AI studies.

  • Brian Caffo said a high number of Stanford leaders are working in AI, which may account for their ability to create a societal review board, whereas at JHU, expert bandwidth may be more limited.

PHAISE + Biostatistics: Challenges in Deep Learning on April 8th

  • On 4/8, Rama Chellappa, School of Engineering, will speak at the Biostatistics seminar co-hosted by PHAISE from 12-1pm.

  • He will speak about the relationship between AI and biostatistics. Task Force members should attend to hear how he addresses health problems with AI approaches. For those who cannot attend, a Zoom link and a recording will be provided.

Future Departmental Seminars with PHAISE

  • Seminars should be hybrid with a Zoom link.

  • Scheduling Mauricio Santillana as a speaker for an upcoming Epi Seminar did not work out. We will revisit this further down the road to determine the best option.

  • PHAISE should feature speakers with broad perspectives and varied expertise.

  • We need clarity on PHAISE budget, which is currently TBD.

  • PHAISE might co-fund Departmental seminars.

  • Recommendations from the Task Force for future speakers:

    • Anthony Leung recommends individuals involved with AlphaFold; BMB invites national speakers on a limited budget.

Opportunities for Collaboration with Groups

  • The colloquium committee from the DSAI Institute might be interested in partnering with PHAISE to cosponsor events, with the possibility of providing some of the funding.

    • Junie is in touch with Shelly Pagano, communications for DSAI.

    • PHAISE would like to invite DSAI leaders to join the task force.

    • Would need to make sure efforts do not overlap with DSAI symposia.

  • Discussion to be continued with the Berman Institute, which has an AI + Ethics series, with the goal of cosponsoring a seminar.

  • HBS (Carl) recommends the Center for Communication Programs (CCP), which does a lot of international work. Most people in the HBS department are probably interested but don’t have the background to take the next steps; they might need a more tailored seminar on how AI can be used. CCP has done analytics using social media. Carl will check with them.

  • Thomas Hartung says there is some interest in EHE. There is a working group that has met frequently. AI can be used in predictive toxicology, and EHE has a rich community working on it, including Thomas.

Scheduling of Seminars

  • The Task Force does not want to take the summer off. Decision tabled for now.

  • The Biostatistics and BMB Departments take the summers off for weekly seminars.

  • Would a schoolwide seminar in July be an option?

  • We do not want to host a speaker when attendance would be poor.

AI Research Day

  • The event could be focused on students, trainees, or postdocs, or be broader. We might launch the first event and then gauge participation to see how it goes.

  • The School of Medicine holds an annual research retreat with Engineering, with posters on many topics.

  • The DSAI Symposium (held twice a year) includes student presentations. Brian Caffo indicated that Academic Programs may collaborate.

  • A “speed-dating” event might be useful to introduce ideas and gauge interest.

  • A keynote presentation could launch the event in an exciting and engaging way, followed by local speakers.

  • “How to include AI in your grant proposal” could be a useful theme.

  • How can we entice leading faculty members in AI to join Departments?

  • Practical sessions, such as breakout rooms or a team-building event, could be incorporated.

  • How could we encourage submissions for a poster session?

    • A couple of awards (e.g., “Best AI Research”) may motivate more abstract submissions, likely in two categories: applied and theoretical. The definition of AI may affect how many submissions we receive; a very broad definition would include more participants. The question is whether researchers would have the bandwidth to submit and present.

    • The Genetic Epi research day was very well attended because it drew on its own community and was part of a training program. PHAISE does not have this, so would there be as many participants?

  • The event could occur at the beginning of the fall term (late September or early October). However, a good number of students complete their studies and graduate in May, and students would have limited data at the beginning of the term.

Strategic Vision or Plan

  • Strategic planning is an involved process that can be fruitful but time-consuming.

  • AI in public health is a rapidly changing field, and it is hard to predict what we will be working on. Whether a strategic plan would still be relevant a year from now is an open question.

  • If not a plan, are there more targeted guiding principles that could lead this Task Force?

  • Emphasizing the Data Council role and taking a more pragmatic approach will prove helpful.

  • There might be a PHAISE Task Force publication, authored as a group, or something similar that frames what is unique about AI in public health.

  • There are two audiences: novices and those at the cutting edge.

  • We can support the Data Council in organizing and collecting big datasets for people who want to build models on them. How do we provide institutional access to these tools?

    • It is difficult to get a ChatGPT license through the Hopkins bureaucratic system; navigating purchasing is cumbersome. The university requires a fixed bill, but the cost of AI services depends on how much they are used, so it is difficult to bill.

    • Users are currently paying out of pocket and submitting reimbursements, which is not feasible to maintain.

    • JHU has an institutional agreement with AWS. A group headed by Ben Van Durme is attempting to obtain GPT licensure at the University level. Some of this infrastructure will come with DSAI.

    • How can we find a solution so that others down the road don’t have to do this on their own?

Other Updates

  • Ahmed Hassoon attended the American Medical Informatics Association (AMIA) meeting in Boston.

    • 90% of presentations involved AI; many of them covered data structures and data processes. Most institutions had adapted their IRB protocols for data processing and access; JHU has not developed such a protocol.

    • Many journal editors, including at NEJM, are struggling to find reviewers because there are not enough reviewers qualified to evaluate DS/AI studies.

    • A list of priorities emerged from the conference:

      • A standard way to evaluate what kind of data was used to train AI models.

      • More discussion about alignment: how models were aligned (to patients, outcomes, etc.) and about the sources of alignment problems across the lifecycle of AI models.

      • New tools to detect biases.

      • Ethics in AI.

      • Big pharma’s main interest is trial emulation.

      • CMS and social determinants of health: there is no federal regulation preventing insurance companies from using social determinants to raise premiums, which could be used to penalize people who live in poor neighborhoods.

  • Google has developed an NLM for systematic reviews and another for grant proposals, trained on NSF grants. It generates grant text and answers questions about which topics are trending and where funding trends are headed. Google is deciding whether BSPH can demo the product. It is unclear whether the database used to train the model included all applications or only awarded grants.

  • Internally, we could use a model of the likelihood of a grant’s success, which would be helpful for Department chairs and faculty. Looking at awarded grants is helpful, but studying those that were not awarded will also help develop the model.

  • AI might be leveraged for hypothesis generation. A study using the Epic data model with data from hospitals has been done by Hassoon and team; no journal has agreed to publish it.

  • A broadly accessible limited dataset of JHM clinical data is forthcoming.

  • The Academic Programs committee (Brian Caffo) is no longer pursuing the development of a minor in DSAI. 

  • There have been unconfirmed indications that JHU received approval for a market-level salary for AI and Data science.

Recording

PHAISE Task Force Meeting 3 Recording.mp4