
Keynote Lectures

Risk-adjusted Decision Making for Sustainable Development
Elena Rovenskaya, International Institute for Applied Systems Analysis (IIASA), Austria

Machine Unlearning: An Enterprise Data Redaction Workflow
David Saranchak, Concurrent Technologies Corporation (CTC), United States

Constructive Preference Learning
Roman Slowinski, Poznan University of Technology, and Polish Academy of Sciences, Poland

 

Risk-adjusted Decision Making for Sustainable Development

Elena Rovenskaya
International Institute for Applied Systems Analysis (IIASA)
Austria
 

Brief Bio
Elena Rovenskaya is the Program Director of the Advancing Systems Analysis (ASA) Program at IIASA. Her scientific interests lie in operations research, decision sciences, and the mathematical modeling of complex socio-environmental systems. Under Dr. Rovenskaya's leadership, the ASA Program develops, tests, and makes available new quantitative and qualitative methods to address problems arising in the policy analysis of socio-environmental systems. Its team of 80+ scientists works to support decisions in the presence of ambiguous stakeholder interests, complex underlying systems, and uncertainty.


Abstract
Many models used to inform sustainable development policies are deterministic: future parameter values are fixed according to estimates or scenarios. However, the future is highly uncertain, and ignoring this uncertainty when making decisions can be very costly. This talk will present the principles of a two-stage, stochastic, chance-constrained programming approach that can be used to derive policies robust to a broad range of uncertain parameter values. It will also feature several example applications, including pollution control and water allocation problems. The benefits of incorporating uncertainty, and the opportunities missed through the lack of perfect information, will be highlighted.
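As a rough illustration of the chance-constrained idea (a single-stage simplification of the two-stage approach described in the abstract), the sketch below picks the cheapest pollution-abatement level whose emissions cap holds with 95% probability under Monte Carlo sampled uncertainty. All numbers, the cost function, and the emissions model are invented for illustration:

```python
import random

def chance_constrained_abatement(cap=100.0, alpha=0.05, n_scenarios=5000, seed=42):
    """Toy pollution-control problem: choose an abatement level x (fraction of
    baseline emissions removed) minimizing cost, subject to the chance constraint
    P(emissions(x, omega) <= cap) >= 1 - alpha, estimated by sampling."""
    rng = random.Random(seed)
    # Uncertain baseline emissions (e.g., driven by economic activity).
    scenarios = [rng.gauss(120.0, 15.0) for _ in range(n_scenarios)]
    best = None
    for step in range(0, 101):              # search abatement fractions 0.00 .. 1.00
        x = step / 100.0
        cost = 50.0 * x ** 2                # convex abatement cost (illustrative)
        ok = sum(1 for e in scenarios if e * (1.0 - x) <= cap)
        if ok / n_scenarios >= 1.0 - alpha:  # chance constraint satisfied
            if best is None or cost < best[1]:
                best = (x, cost)
    return best

x_opt, cost_opt = chance_constrained_abatement()
print(f"cheapest feasible abatement: {x_opt:.2f} at cost {cost_opt:.2f}")
```

A deterministic model using only the mean baseline (120) would choose a smaller x that frequently violates the cap under uncertainty, which is the costly mistake the abstract warns against.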



 

 

Machine Unlearning: An Enterprise Data Redaction Workflow

David Saranchak
Concurrent Technologies Corporation (CTC)
United States
 

Brief Bio
David Saranchak is a Research Fellow and the Artificial Intelligence & Machine Learning Program Lead at Concurrent Technologies Corporation. He leads research and development of emerging techniques in data analysis, machine learning assurance, and differential privacy for multimodal data applications in enterprise platforms and tactical edge environments. He serves as the President-Elect of the Military Operations Research Society (MORS), an international professional analytic society focused on enhancing the quality of national security decisions. He is also an active member volunteer in the Institute for Operations Research and the Management Sciences (INFORMS), where he is a Certified Analytics Professional and an Analytics Capability Evaluation coach focused on helping organizations improve the performance of their analytical processes. Previously, he was a Lead Data Scientist with Elder Research, where he developed and applied statistical data modeling techniques for national security clients, meeting unique needs through creative analytic tradecraft with static and streaming data sets. He also extended his team's strong technical edge by developing and leading training for Elder Research's Maryland office that emphasized the technologies best able to meet clients' needs. Mr. Saranchak has more than a dozen years of technical civil service experience as an Applied Mathematician and Software Engineer, including assignments to the UK and Canada and long-term deployments to Iraq and Afghanistan.


Abstract
Individuals are gaining more control of their personal data through recent data privacy laws such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). One aspect of these laws is the ability to request that a business delete your information, the so-called “right to be forgotten” or “right to erasure”. These laws have serious implications for companies and organizations that train large, highly accurate deep neural networks (DNNs) on these valuable consumer data sets. The initial training process can consume significant resources and time, so once an accurate solution is achieved, updates to the model are often incremental. Training data can also be distributed or lost, making complete retraining impossible. A redaction request therefore poses complex technical challenges: how can an organization comply with the law while fulfilling core business operations?
DNNs are complex functions, and the relationship between a single data point, the model weights, and the model output probabilities is not fully understood. In some cases, DNNs can leak information about their training data sets in subtle ways. In one type of attack, the membership inference (MI) attack, an attacker who can query the model can learn whether a given data record was used in its training, which would be a serious breach of the GDPR and the CCPA.
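One common MI attack variant is confidence thresholding: models tend to be more confident on records they were trained on. The sketch below runs this attack on synthetic confidences standing in for a trained DNN's outputs; the distributions and threshold are invented for illustration, and a real attack would query an actual model:

```python
import random

def simulate_confidences(n=1000, seed=0):
    """Synthetic stand-in for a trained DNN's top-class confidences:
    members (training records) score higher than unseen non-members."""
    rng = random.Random(seed)
    members = [min(1.0, rng.gauss(0.92, 0.05)) for _ in range(n)]
    non_members = [min(1.0, rng.gauss(0.75, 0.12)) for _ in range(n)]
    return members, non_members

def mi_attack(confidence, threshold=0.85):
    """Confidence-thresholding membership inference: predict 'member'
    whenever the model's top-class confidence exceeds the threshold."""
    return confidence > threshold

members, non_members = simulate_confidences()
tp = sum(mi_attack(c) for c in members)            # members correctly flagged
tn = sum(not mi_attack(c) for c in non_members)    # non-members correctly cleared
accuracy = (tp + tn) / (len(members) + len(non_members))
print(f"MI attack accuracy on synthetic confidences: {accuracy:.2f}")
```

Any accuracy meaningfully above 0.5 means the model is leaking membership information, which is precisely the privacy breach the talk addresses.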
In this talk, we introduce a DNN model training and lifecycle maintenance process that establishes how to handle specific data redaction requests and, in certain scenarios, avoid completely retraining the model. Our new process includes quantifying the MI attack vulnerability of all training data points and identifying and removing those most vulnerable from the training data set. An accurate model is then achieved upon which incremental updates can be performed to redact sensitive data points. We will discuss heuristics learned through experiments that train DNNs and redact data from them, including new metrics that quantify this vulnerability and how we verify the redaction.
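The shape of such a workflow can be caricatured in a few lines. Here a running mean stands in for the DNN, so the per-point incremental update is exact; the vulnerability score and the data are hypothetical stand-ins for the metrics discussed in the talk:

```python
def mi_vulnerability(confidence):
    """Stand-in vulnerability score: points the model is most confident on
    are the ones a membership-inference attack identifies most easily."""
    return confidence

def redact_and_update(data, confidences, k):
    """Sketch of the redaction workflow: (1) rank training points by MI
    vulnerability, (2) remove the k most vulnerable, (3) update the model
    incrementally instead of retraining from scratch. Here the 'model' is
    just a running mean, so each removal is an exact incremental downdate."""
    n = len(data)
    mean = sum(data) / n                       # the 'trained model'
    ranked = sorted(range(n), key=lambda i: mi_vulnerability(confidences[i]),
                    reverse=True)
    to_remove = set(ranked[:k])
    for i in ranked[:k]:                       # incremental downdate per point
        mean = (mean * n - data[i]) / (n - 1)
        n -= 1
    kept = [x for i, x in enumerate(data) if i not in to_remove]
    return mean, kept

data = [1.0, 2.0, 3.0, 10.0]
conf = [0.6, 0.7, 0.65, 0.99]     # the outlier is the most MI-vulnerable point
model, kept = redact_and_update(data, conf, k=1)
print(model, kept)                 # mean over the kept points, without retraining
```

For a real DNN the downdate step is the hard research problem; the point of the sketch is only the pipeline: score vulnerability, redact, update incrementally, then verify.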



 

 

Constructive Preference Learning

Roman Slowinski
Poznan University of Technology, and Polish Academy of Sciences
Poland
http://idss.cs.put.poznan.pl/site/108.html
 

Brief Bio
Roman Slowinski is a Professor and the Founding Chair of the Laboratory of Intelligent Decision Support Systems at Poznan University of Technology, Poland. He is Vice President of the Polish Academy of Sciences, elected for the 2019-2022 term, a Member of Academia Europaea, and a Fellow of IEEE, IRSS, INFORMS, and IFIP. In his research, he combines Operational Research and Artificial Intelligence for decision aiding. He is a recipient of the EURO Gold Medal of the European Association of Operational Research Societies (1991) and Doctor Honoris Causa of the Polytechnic Faculty of Mons (Belgium, 2000), the University Paris Dauphine (France, 2001), and the Technical University of Crete (Greece, 2008). In 2005 he received the Annual Prize of the Foundation for Polish Science, the highest scientific honor awarded in Poland. Since 1999, he has been the principal editor of the European Journal of Operational Research (Elsevier), a premier journal in Operational Research.


Abstract
We present a constructive preference learning methodology called robust ordinal regression. Its aim is to learn the Decision Maker's (DM's) preferences in multiattribute decision aiding. It links Operational Research with Artificial Intelligence and, as such, confirms the current trend in mutual relations between OR and AI.

In decision problems involving multiattribute evaluations of alternatives, the dominance relation in the set of alternatives is the only objective information that stems from their formulation. This is the case in multiple criteria decision making, group decision making, and decision under uncertainty. While the dominance relation makes it possible to eliminate many irrelevant (i.e., dominated) alternatives, it leaves many alternatives incomparable. In order to recommend a decision concordant with the value system of a single DM or multiple DMs (best choice, classification, or ranking), the analyst must take into account the preferences of the DM(s).

In the presented methodology of robust ordinal regression, the DM's preferences are exhibited through decision examples. We show how to transform the decision examples into a mathematical model of preferences in one of three forms: utility functions, binary (outranking) relations, or logical “if…, then…” decision rules. In practical decision aiding, the process of preference elicitation, preference modeling, and the DM's analysis of a recommendation loops until the DM (or a group of DMs) accepts the recommendation or decides to change the problem setting. Such an interactive process is called constructive preference learning.
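As a toy illustration of learning a value model from decision examples (using an additive value function and a coarse weight grid; the alternatives and the DM's stated preferences are invented), the sketch below enumerates every weight vector compatible with the examples. Robust ordinal regression reasons over this whole compatible set rather than arbitrarily picking one member of it:

```python
from itertools import product

def compatible_weights(alternatives, preferences, step=0.1):
    """Toy ordinal regression: find additive-value-function weights w
    (non-negative, summing to 1) such that every decision example
    'a preferred to b' is reproduced: sum(w_j * a_j) > sum(w_j * b_j).
    Returns all compatible weight vectors on a coarse grid."""
    m = len(next(iter(alternatives.values())))   # number of criteria
    grid = [round(i * step, 10) for i in range(int(1 / step) + 1)]
    result = []
    for w in product(grid, repeat=m):
        if abs(sum(w) - 1.0) > 1e-9:             # keep only normalized weights
            continue
        def value(name):
            return sum(wj * xj for wj, xj in zip(w, alternatives[name]))
        if all(value(a) > value(b) for a, b in preferences):
            result.append(w)
    return result

# Criteria already rescaled to [0, 1]; the DM states two decision examples.
alts = {"A": (0.9, 0.2), "B": (0.4, 0.6), "C": (0.1, 0.9)}
examples = [("A", "B"), ("B", "C")]              # A preferred to B, B to C
ws = compatible_weights(alts, examples)
print(ws)
```

Here the examples force a weight above 0.5 on the first criterion, so several weight vectors remain compatible; presenting the consequences that hold for all of them, rather than for one arbitrary choice, is the "robust" part of the methodology.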


