AAAI Spring Symposium 2020
Towards Responsible AI in Surveillance, Media, and Security through Licensing
March 23-25
Spring Symposium Series, Stanford University, Palo Alto, California, USA
Call for Papers
This symposium will focus on the creation of end-user and source code licenses that developers may include with AI software to restrict its use in surveillance, media, and security. Developing technology licenses that restrict use requires consensus on how responsible use should be defined for different domains and applications, what types of clauses such a license should include, and how such licenses could be enforced from a legal standpoint. The symposium therefore seeks participation from a diverse, interdisciplinary group who can help formulate the challenges, risks, and specific conditions that future iterations of the licenses should address. It will provide an invaluable opportunity to bring together experts in AI, the legal community, and a variety of applied domains to discuss what types of clauses would be appropriate and enforceable, and then to develop them.
Submissions
We will consider two types of submissions: case studies of misuse or potential misuse (4-8 pages) and position papers (2-4 pages), each addressing one or more of the following areas: surveillance, media, or security. Please clearly indicate the submission type and area. Please also indicate whether your expertise is in AI, law, or one of the domains of expertise listed below.
Deadline: November 20, 2019
Submission link: https://easychair.org/conferences/?conf=sss20
Contact: responsibleailicenses@gmail.com
Topic areas
Surveillance - This includes both overt and covert deployment of AI models for collecting and analyzing personal data by individuals, groups, companies, or governments.
Media - This includes the use of AI models to create synthetic text, image, video, or audio data for the purposes of entertainment, advertising, propaganda, or education, as well as the algorithmic targeting of people with this or other non-synthetic content.
Security - This includes the use of AI models in systems used in military and humanitarian applications.
Domains of expertise
Transportation (e.g. autonomous vehicles, drones)
Employment (e.g. hiring, workplace decision making)
Healthcare (e.g. service provision and delivery, disease risk prediction, biomedical research, deanonymization)
Education (e.g. content recommendation, education delivery, etc.)
Law enforcement and Judiciary (e.g. surveillance, criminal risk prediction, AI applications in law)
News and Social Media (e.g. machine-generated news, images, propaganda, content recommendation, “fake” news)
Satellite reconnaissance and surveying (e.g. applications for agriculture, deforestation, mining)
Military applications (e.g. autonomous or semi-autonomous weapons)
Essential service delivery and tracking (e.g. delivery of healthcare, housing, food or medicine aid)
Others not mentioned (please describe)
Format
Over its two and a half days, the symposium will feature invited talks and paper presentations, followed by breakout group sessions to explore future directions for AI licensing in these domains.
Each topic area will have participants focused on developing high-priority use cases. Participants will have access to crowdsourced ideas and will also be free to define their own applications whose use should be restricted via license clauses. Legal experts will take part in the symposium, providing advice and guidance.
The primary tangible outcome will be a set of use cases to inform the licenses. The results will be disseminated through domain- and application-specific licenses, grounded in contract law for enforceability, for use by software providers, researchers, and developers.
Organizing Committee
Danish Contractor, IBM Research
Julia Haines, Google
Daniel McDuff, Microsoft Research
Brent Hecht, Northwestern University
Christopher Hines, K&L Gates