
Task Force Meeting 6


Key Points

AI Research Day

  • Ziad Obermeyer agreed to keynote, and the date was moved to November 13th.

  • Event schedule includes a keynote, award winners' presentations, and a poster session.

  • Final details on abstracts, posters, and review committees will be discussed in early September.

Upcoming Sip and Solve Event

  • Co-hosting with the DSAI group, likely in early October.

  • Looking for speakers, with two options: speakers from public health or a mix from medicine, nursing, and public health.

  • General preference for including multiple schools to enhance representation and discussion.

Research Integrity Seminar Series

  • Series to meet training requirements, open to faculty, postdocs, and others.

  • Possible topic on ethics of incorporating AI in research.

  • Consideration of internal speakers versus national/international experts.

AI Detection Programs

  • Discussion on the efficacy and reliability of AI detection programs like CopyLeaks.

  • Concerns about false positives and false negatives in AI detection.

Ethical Concerns with AI Tools

  • Alyssa Columbus emphasized the need for presenting both the benefits and the risks of AI tools, particularly in their potential to falsely accuse someone of plagiarism.

  • Gregory Kirk noted that responsible conduct of research seminars often don't prescribe exact actions but encourage discussions on the use and interpretation of these tools.

  • Ahmed Hassoon shared his experience with various AI detection tools, highlighting their inconsistencies. Some tools falsely flag human-written content as AI-generated, while others fail to detect AI-generated text.

  • He also mentioned a test where AI was instructed to evade detection, illustrating the tools' limitations and the potential for AI to mimic human writing convincingly.

  • Discussion on tools and methods to make AI-generated content appear more human-like. Ahmed described techniques like providing writing samples to AI and specific prompts to ensure the output is indistinguishable from human writing.

  • Both Ahmed and Alyssa noted instances of AI-generated content being published in peer-reviewed journals, raising concerns about the integrity of academic publications.

Departmental Seminar Series

  • Gregory Kirk mentioned efforts to be more proactive in organizing departmental seminars, with a focus on topics and speakers relevant to public health.

Pilot Projects Using AI

  • Plans to develop pilot projects using internally housed AI models for grant pre-review systems and qualitative analysis. These projects aim to demonstrate how AI can be leveraged for public health-related research.

AI Review Committees

  • Ahmed Hassoon discussed the establishment of AI sub-councils at the School of Medicine to review AI tools, focusing on clinical, operational, and radiology applications. These committees aim to prioritize AI projects, assess risks, and provide recommendations to the IRB.

AI Governance Approach

  • Gregory Kirk highlighted that the Task Force’s role is not regulatory like the Data Trust. Instead, he advocates for providing best practices guidance to individual faculty and research teams, rather than establishing a strict governance structure.

Upcoming Events and Initiatives

  • Gregory Kirk mentioned a busy fall schedule, with Research Day in November and several seminars. The group is also working on pilot projects to secure funding.

Educational Course for Congressional Staffers

  • Brian Caffo described a successful AI educational course for junior Congressional staffers, organized with Chris Austin from the University's Federal Strategy; about 50 staffers attended the course in person.

  • The course covered AI ethics, language models, AI in defense, medical applications, and more. Notable speakers included Liz Chin on AI ethics, Mark Dredze on language models, and Rama Chellappa on AI perception.

  • The course was strategically scheduled for a Friday morning in August, between Congressional sessions, when staffers typically have downtime, to ensure higher attendance.

  • There is potential for similar events focused on health policy and other topics. The successful model from the AI course can be adapted for rapid response to emerging issues, offering timely education to Congressional staffers.

The meeting concluded with a call for additional topics of interest and appreciation for the participants' time.

Recording

PHAISE Task Force Meeting 6 Recording.mp4