Abstracts Track 2024


Area 1 - Methodologies and Technologies

Nr: 8
Title:

An Optimization Model for Green Single Allocation Hub Covering Problem

Authors:

Nazmi Sener

Abstract: There is a growing trend towards adopting environmentally friendly practices in supply chain management, as regulations have been introduced to reduce the negative impact of transportation activities on the environment. A new optimization model is proposed for the single allocation hub covering problem, which determines the optimal location of hubs and the allocation of flows between origin and destination points in a logistics network. Unlike existing models, the proposed model takes into account the cost of carbon emissions, with the goal of minimizing the total amount of emissions in the network. The study demonstrates the model's effectiveness in minimizing carbon emissions by combining a commonly used hub covering dataset with a carbon emission dataset from the International Air Transport Association (IATA). A comparison with previous models that do not account for emissions shows that the proposed model is more effective at reducing carbon emissions, highlighting the benefits of incorporating environmental concerns into supply chain management. By doing so, this study emphasizes the importance of considering environmental factors in supply chain management decisions, and the proposed model offers a promising approach for reducing the carbon footprint of transportation activities while maintaining efficient and effective logistics networks.
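
A minimal sketch of the underlying idea, not the authors' model: with a single hub, each origin-destination flow travels origin -> hub -> destination, so the emission-minimizing hub can be found by evaluating the total emission cost of every candidate subject to a coverage radius. All node names, distances, flows, the radius, and the emission factor below are invented for illustration.

```python
# Toy single-hub selection by total carbon-emission cost (hypothetical data).
from itertools import product

nodes = ["A", "B", "C", "D"]
dist = {("A", "B"): 300, ("A", "C"): 500, ("A", "D"): 700,
        ("B", "C"): 400, ("B", "D"): 600, ("C", "D"): 350}

def d(i, j):
    return 0 if i == j else dist.get((i, j), dist.get((j, i)))

flow = {(i, j): 10 for i, j in product(nodes, nodes) if i != j}  # uniform demand
emission_per_unit_km = 0.002   # hypothetical kg CO2 per unit of flow per km
cover_radius = 800             # hypothetical: each leg of a route must be within this radius

def total_emissions(hub):
    return sum(f * (d(i, hub) + d(hub, j)) * emission_per_unit_km
               for (i, j), f in flow.items())

feasible = [h for h in nodes
            if all(d(i, h) <= cover_radius and d(h, j) <= cover_radius
                   for (i, j) in flow)]
best_hub = min(feasible, key=total_emissions)
print(best_hub, round(total_emissions(best_hub), 1))
```

The actual model in the abstract optimizes hub location and allocation jointly over a network; this enumeration only illustrates how an emission-based objective replaces the usual cost-based one.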

Nr: 18
Title:

Financing Cost Distribution in Project Payment Scheduling: Exact Solution Procedures

Authors:

Majid Yazdani and Tarik Aouam

Abstract: This work delves into the intricacies of project payment scheduling by considering the perspectives of both contractors and clients, with a focus on the equitable distribution of project financing costs between these two parties. The concept of project financing costs encompasses the expenses related to acquiring external funds or the opportunity costs tied to the capital invested in the project. The primary objective of this study is to devise a project payment schedule that maximizes the combined revenue of both parties, all while adhering to their respective acceptance criteria. To address this complex issue, we construct an optimization model utilizing the activity-based approach, consisting of two interconnected submodels. We introduce multiple distinct yet effective solution methodologies. Through comprehensive performance assessments, our findings highlight that an accelerated Benders decomposition algorithm stands out as the most promising solution, particularly when dealing with larger-scale problems. Furthermore, our computational results undergo a thorough analysis, shedding light on the influence of key parameters and leading to several insightful conclusions. In summary, this research significantly contributes to the understanding of project payment scheduling challenges from a holistic standpoint. It presents a comprehensive optimization model and assesses various solution approaches, offering valuable insights that can guide practitioners in the field toward more efficient and equitable project payment scheduling practices.
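
A toy illustration of the trade-off the abstract describes, not the paper's optimization model or its Benders decomposition: when the contractor's financing rate differs from the client's, the timing of payments shifts financing cost between the two parties and changes their combined discounted value. The expenditure profile, rates, and payment schedules below are hypothetical.

```python
# How payment timing redistributes financing cost between contractor and client
# (invented numbers; each party discounts cash flows at its own financing rate).
expenditure = [40, 40, 40, 40]        # contractor spends 40 per period
r_contractor, r_client = 0.03, 0.01   # per-period financing (opportunity) rates

def combined_value(payments):
    """Sum of both parties' net present values of the payment stream."""
    npv_contractor = sum((payments[t] - expenditure[t]) / (1 + r_contractor) ** t
                         for t in range(len(expenditure)))
    npv_client = sum(-payments[t] / (1 + r_client) ** t
                     for t in range(len(expenditure)))
    return npv_contractor + npv_client

early = [40, 40, 40, 40]   # progress payments as the work is done
late = [0, 0, 0, 160]      # a single payment at project completion
print(round(combined_value(early), 2), round(combined_value(late), 2))
```

In this toy setting the early schedule yields a higher combined value because the contractor finances at a higher rate than the client, which is the kind of imbalance the paper's payment-scheduling model is designed to resolve subject to both parties' acceptance criteria.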

Nr: 56
Title:

Energy Efficiency Optimization in a Two-Level Supply Chain Model: Incorporating Shortage Considerations

Authors:

Hong Nguyen Nguyen, Matthieu Godichaud and Lionel Amodeo

Abstract: As a result of the global energy crisis, energy prices have risen significantly. Governments are also implementing policies to reduce carbon emissions and promote sustainable development. Consequently, enterprises within the global supply chain are under pressure to optimize energy efficiency in their operations. In response to this trend, many authors have incorporated sustainability and energy efficiency into inventory models (Becerra et al., 2021). These efforts aim to reduce energy consumption, CO2 emissions, and overall costs for companies. Energy consumption during production, warehousing, and transportation is also identified as a major source of emissions. A two-machine inventory model has been proposed to analyze energy consumption during both production and non-production phases (Zanoni et al.). The authors provide a method to determine the optimal production rate and machine status during the non-production phase, minimizing the total cost, including energy costs. Furthermore, the impact of shortages on carbon emissions reduction has been investigated (Taleizadeh et al., 2018). A Sustainable Economic Production Quantity (SEPQ) model is proposed, considering carbon tax and different shortage policies (full backorder, partial backorder, and full lost sale). Building upon the existing literature, this study focuses on a two-level inventory system with a single product and a production line. Energy consumption is quantified for each cycle of the production line, considering both production and non-production periods. To conserve energy, the production line's state during the non-production phase is treated as a decision: the line can either remain on standby or be turned off. Additionally, shortages are incorporated into the inventory model and are fully compensated during the production phase at the end of each cycle. A resolution procedure is developed for the mixed-integer nonlinear programming (MINLP) problem to determine optimal decisions regarding the production line's non-production state, production rate, maximum backordering quantity from the retailer, number of shipments per batch production run, and quantity transported per delivery. These decisions aim to minimize the overall system's average total cost, including energy consumption costs. This study is also an extension of our previous work that was published in the IFAC World Congress 2023 (Nguyen et al., 2023).
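
A rough brute-force sketch of the kind of decision the MINLP makes, not the authors' resolution procedure: for a toy cost function, evaluate candidate production rates, lot sizes, and the two non-production states of the line (standby versus off with a restart energy penalty), and keep the cheapest combination. Every parameter and the cost expressions themselves are invented assumptions.

```python
# Toy average-cost evaluation over production rate, lot size, and non-production state.
demand = 100.0            # units per period
setup_cost = 500.0
holding_cost = 0.2        # per unit per period
energy_price = 0.15       # per kWh
e_production = 3.0        # kWh per unit produced (hypothetical)
e_standby = 20.0          # kWh per period the line idles on standby
e_restart = 150.0         # kWh to restart the line after being turned off

def average_cost(rate, lot, non_prod_state):
    cycle = lot / demand                    # cycle length
    t_prod = lot / rate                     # production time within a cycle
    t_idle = cycle - t_prod
    energy = e_production * lot
    energy += e_standby * t_idle if non_prod_state == "standby" else e_restart
    holding = holding_cost * lot * (1 - demand / rate) / 2 * cycle
    return (setup_cost + energy_price * energy + holding) / cycle

candidates = [(r, q, s) for r in (120, 150, 200, 300)
              for q in (200, 400, 600, 800)
              for s in ("standby", "off")]
best = min(candidates, key=lambda c: average_cost(*c))
print(best, round(average_cost(*best), 2))
```

The study's actual model additionally decides backordering quantities, the number of shipments per batch, and delivery quantities over a two-level system; the sketch only shows how the standby/off decision and the production rate interact through energy cost.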

Nr: 67
Title:

An Aggregation Rule Under Uncertainty Based on the Normal Distribution

Authors:

Josep Freixas and Xavier Molinero

Abstract: Many decision-making situations require the evaluation of several agents or judges. In a situation where agents evaluate candidates, the question arises of how best to aggregate evaluations so as to compare the candidates. The aim of this work is to propose a method of aggregating the evaluations of the agents, which has outstanding properties and becomes a potential evaluative tool in many contexts. Moreover, the proposed rule is based on the most significant probability distribution, the normal distribution. The proposed rule is useful even when the agents who evaluate candidates are not the same. We describe below some real-world situations that can be treated by using the proposed measure. 1. Ranking of restaurants or movies on dedicated websites: the set of customers who rate restaurants or movies is usually quite large. Thus, we need a method to rank restaurants or movies regardless of how many people have evaluated each of them. This significant example is the main motivation of this contribution. 2. Artistic sport competitions: some sport competitions require the evaluation of athletes' performance by a panel of agents, such as diving, artistic skating, gymnastics, rhythmic gymnastics, dancing, riding, etc. The rules used are very diverse and range from the mean (used by several federations, for example the International Federation for Equestrian Sports), the median (the synthesis of the scores of each sub-jury in rhythmic gymnastics is a median), and trimmed means (used in various cases of diving) to some sophisticated methods. These usual methods, although simple, have been shown to be quite unsatisfactory. The method proposed in this work is ideal for analysing these situations and provides fairer rankings than the quoted alternatives. 3. Doctoral student selection problem: some universities offer programs for gaining the doctoral (Ph.D.) degree. The process to select students is open to students from everywhere. In most cases the number of applicants is much greater than the number of available scholarships, so the candidates must be ranked. The problem of selecting promising young doctoral researchers can be seen as consisting of a collection of applicants for the Ph.D. program and a set of experts who grade each applicant using a pre-established set of allowed scores. The proposed method to rank the candidates is ideal for this example. 4. Rating companies by debt agencies: debt rating agencies (such as Standard & Poor's, Fitch, Moody's or DBRS, among others) evaluate companies using a rating scale that goes from the best evaluation to the worst and comprises 21 degrees. The rating scale is mainly divided into two categories: investment (from 1st to 10th) and speculative (from 11th to 21st). Different agencies provide company ratings simultaneously. How these ratings are aggregated is fundamental for the company, for its investors and for potential future investors. The proposed method of aggregation is ideal for this situation. 5. Ranking of research projects by scientific committees: the main distinction with the third example is that here the expected number of project applications is usually large. Even if the number of submitted research projects is large, the proposed method is a good tool to rank them and thus to assign financial support to those that have been best qualified by the scientific panel committee.
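
The abstract does not spell out the rule itself, so the following is only a hedged sketch of one natural normal-distribution-based aggregation, not the authors' method: each judge's raw scores are standardized against that judge's own mean and standard deviation and mapped to standard-normal percentiles, so judges with different scoring habits, and who rate different subsets of candidates, become comparable before averaging. All scores are invented.

```python
# Hypothetical normal-based aggregation: per-judge standardization, then averaging.
from statistics import NormalDist, mean, pstdev

scores = {  # judge -> {candidate: raw score}; judges need not rate every candidate
    "judge1": {"X": 9.0, "Y": 7.0, "Z": 8.0},
    "judge2": {"X": 6.0, "Z": 5.0},
    "judge3": {"Y": 8.5, "Z": 9.5},
}

def aggregate(scores):
    percentiles = {}
    for judge, marks in scores.items():
        mu, sigma = mean(marks.values()), pstdev(marks.values()) or 1.0
        for cand, s in marks.items():
            # position of the score within this judge's own scoring distribution
            percentiles.setdefault(cand, []).append(NormalDist(mu, sigma).cdf(s))
    return {cand: mean(vals) for cand, vals in percentiles.items()}

print(sorted(aggregate(scores).items(), key=lambda kv: -kv[1]))
```

This only illustrates why a normal-distribution reference makes rankings insensitive to how many, and which, judges rated each candidate, which is the property the abstract emphasizes.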

Nr: 68
Title:

Metrics for Simple non-Weighted Voting Games from Inequality Systems

Authors:

Xavier Molinero, Josep Freixas and Maria Albareda

Abstract: In voting theory and social choice theory, decision systems can be represented as simple games, i.e., cooperative games viewed as models of voting systems in which a single alternative, such as a bill or an amendment, is pitted against the status quo. Simple games establish that the benefit a coalition may obtain is always binary, i.e., a coalition may be winning or losing, depending on whether the players in the coalition are able to benefit themselves from the game by achieving some goal together. Many voting systems, like many real situations, can be represented as weighted voting games, one of the most relevant subclasses of simple games. We say that a simple game is a weighted voting game if there exists a real number q and we are able to assign a real number to each player in such a way that the sum of the weights of any winning coalition meets or exceeds the quota q, whereas the sum of the weights of any losing coalition is less than q. Both simple games and weighted voting games are closely related to many other disciplines such as logic and threshold logic, circuit complexity, artificial intelligence, geometry, linear programming, Sperner theory, order theory, agent systems, social network analysis, etc. For simple games, there are many known metrics related to specific properties: a) decisive index, which counts how many winning coalitions there are with respect to all possible coalitions; b) dimension, the minimum number of weighted voting games whose intersection is the given simple game; c) codimension, the minimum number of weighted voting games whose union is the given simple game; d) multidimension, the minimum number of weighted voting games whose intersection and union is the given simple game; e) trade robustness, counting how many players can be mixed from a winning coalition to get a losing coalition; f) invariant trade robustness, counting how many players can be mixed from a minimal winning coalition to get a losing coalition; g) alpha-robustness, how far a simple game is from being a weighted voting game based on some inequality systems. In this work, we establish new metrics based on linear programming that evaluate how far a simple game is from being a weighted voting game: a) from integer (or real) weights depending on the minimum sum of the errors over all coalitions; b) from real weights depending on the minimum maximum error over all coalitions; c) from real weights depending on a minimum global error over all coalitions; d) from real weights depending on the number of mixed coalitions. Furthermore, we give some exhaustive experimental results for up to 6 players, up to isomorphism. We also analyse some metrics to establish a classification of simple games and to compare the behaviour of each game based on those metrics. Acknowledgements: J. Freixas and X. Molinero have been partially supported by funds from the Ministry of Science and Innovation grant PID2019-104987GB-I00 (JUVOCO) and the Catalan government [2021 SGR 01419 ALBCOM]; M. Albareda has been partially supported by funds from the Ministry of Science and Innovation grant PID2022-139219OB-I00 (MOSCA).
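
A minimal sketch of the general idea of an LP-based distance to weightedness, in the spirit of metric b) above but not necessarily the authors' exact inequality system: relax the weight/quota inequalities by a common slack eps and minimize eps; the game is weighted exactly when the optimum is 0. The example game (minimal winning coalitions {1,2} and {3,4} on four players) is a standard non-weighted game, so a positive eps is expected. Requires SciPy.

```python
# LP measuring the smallest maximum violation of the weightedness inequalities.
from scipy.optimize import linprog

n = 4
winning = [(0, 1), (2, 3)]                 # minimal winning coalitions
losing = [(0, 2), (0, 3), (1, 2), (1, 3)]  # maximal losing coalitions

# Variables: w_0..w_{n-1}, quota q, slack eps.  Objective: minimize eps.
c = [0.0] * n + [0.0, 1.0]
A_ub, b_ub = [], []
for S in winning:   # want sum_{i in S} w_i + eps >= q  ->  -sum w_i + q - eps <= 0
    row = [0.0] * (n + 2)
    for i in S:
        row[i] = -1.0
    row[n], row[n + 1] = 1.0, -1.0
    A_ub.append(row); b_ub.append(0.0)
for T in losing:    # want sum_{i in T} w_i - eps <= q - 1  ->  sum w_i - q - eps <= -1
    row = [0.0] * (n + 2)
    for i in T:
        row[i] = 1.0
    row[n], row[n + 1] = -1.0, -1.0
    A_ub.append(row); b_ub.append(-1.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 2), method="highs")
print("min max violation eps =", res.fun)  # positive here, so the game is not weighted
```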

Nr: 81
Title:

Data-driven Learning for the Nurse Scheduling Problem

Authors:

Aymen Ben Said and Malek Mouhoub

Abstract: Solving a combinatorial optimization problem involves finding a consistent assignment of values to variables satisfying a set of constraints while optimizing some objectives. Before solving a given problem, the latter must be modeled in terms of decision variables, their domains, hard and soft constraints, and one or more objective functions. Modeling can, however, be a tedious task requiring strong expertise. In this context, modeling the Nurse Scheduling Problem (NSP) in healthcare units may be achieved actively through an expert or passively using historical data. Furthermore, passive modeling may be done manually or automatically. Manual modeling involves examining historical data and eliciting the problem's constraints. However, this process may be tedious, given the massive amount of available data. Therefore, automatic modeling may be considered to overcome this challenge in practice (especially in the case of incomplete knowledge about the constraints and preferences). Constraints can be learned implicitly by relying on machine learning methods or explicitly using constraint learners. In this paper, we propose two modeling approaches for learning the NSP: the first approach aims to learn a constraint network, represented as a Constraint Satisfaction Problem (CSP), and the second aims to approximate the NSP constraints and preferences from historical data. More precisely, we tackle the following two research questions: "How can we explicitly learn a CSP model for the NSP given historical data?" and "How can we approximate the constraints and preferences of the NSP using historical data?".
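
A toy illustration of passive constraint acquisition, not the authors' learner: from historical rosters we derive one candidate hard-constraint bound, the largest number of consecutive working days ever observed, and then use it to check a new schedule. The rosters, shift codes, and the particular constraint are invented for illustration.

```python
# Learn a "max consecutive working days" bound from historical rosters (invented data).
def max_consecutive(shifts):
    """Longest run of working days ('D' day shift, 'N' night shift) in a weekly roster."""
    best = run = 0
    for s in shifts:
        run = run + 1 if s in ("D", "N") else 0
        best = max(best, run)
    return best

history = [
    ["D", "D", "D", "-", "N", "N", "-"],
    ["-", "D", "D", "D", "D", "-", "N"],
    ["N", "N", "-", "-", "D", "D", "D"],
]
learned_bound = max(max_consecutive(week) for week in history)

candidate = ["D", "D", "D", "D", "D", "-", "-"]
print("learned max consecutive working days:", learned_bound)
print("candidate roster satisfies learned constraint:",
      max_consecutive(candidate) <= learned_bound)
```

A full constraint learner would elicit a whole network of such constraints (and soft preferences) over the decision variables of the NSP; the sketch only shows the passive, data-driven flavor of the approach.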

Area 2 - Applications

Nr: 7
Title:

Maximizing Profit in Hub Location Problems with Consideration of Carbon Emission Costs

Authors:

Fethullah Gocer

Abstract: The primary aim of this study is to develop mathematical models that can help address the challenge of profit maximization for hub locations. The models are designed to consider various factors such as the location of hubs, the design of hub networks, and the routing of demand. Additionally, the models are developed taking several allocation strategies into account, namely multiple allocation, single allocation, and r-allocation. Moreover, the models factor in the costs of transportation, hub installation, operating hub links, and carbon emissions when calculating profit. The inclusion of carbon emission cost is a crucial aspect of this research, as it helps to account for the environmental and public health impact of emissions. The integration of these factors into the models makes them more comprehensive and effective in optimizing hub network operations. To evaluate the effectiveness of the models, two well-known data sets from the literature are used, and the resulting hub networks are analyzed under varying parameter settings. The analysis results provide insights into how the models perform under different scenarios and parameter settings. The contribution of this research lies in its potential to aid the strategic planning of hub networks, leading to improved operational efficiency and reduced environmental impact. By incorporating critical factors such as transportation costs, hub installation costs, and carbon emissions, the developed models can assist in the design of more sustainable and efficient hub networks, which can benefit both businesses and the environment.
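
A minimal sketch of the profit-maximization idea, not the paper's models: enumerate small hub sets, route each origin-destination pair through its cheapest open hub (a simplification of the allocation strategies discussed), and serve a pair only when its revenue exceeds its transportation and emission cost; hub installation cost is charged per opened hub. All revenues, costs, distances, and the emission price are hypothetical.

```python
# Toy profit-maximizing hub selection with an emission cost term (invented data).
from itertools import combinations, permutations

nodes = ["A", "B", "C", "D"]
dist = {("A", "B"): 3, ("A", "C"): 5, ("A", "D"): 7,
        ("B", "C"): 4, ("B", "D"): 6, ("C", "D"): 3}

def d(i, j):
    return 0 if i == j else dist.get((i, j), dist.get((j, i)))

revenue_per_unit = 12.0
transport_cost_per_unit_dist = 1.0
emission_cost_per_unit_dist = 0.3
hub_installation_cost = 40.0
demand = {(i, j): 5 for i, j in permutations(nodes, 2)}

def profit(hubs):
    total = -hub_installation_cost * len(hubs)
    for (i, j), q in demand.items():
        route_len = min(d(i, h) + d(h, j) for h in hubs)
        margin = q * (revenue_per_unit - route_len *
                      (transport_cost_per_unit_dist + emission_cost_per_unit_dist))
        total += max(margin, 0.0)   # an unprofitable pair is simply not served
    return total

best = max((set(c) for k in (1, 2) for c in combinations(nodes, k)), key=profit)
print(best, round(profit(best), 1))
```

Unlike cost-minimizing hub models, a profit objective may leave some demand unserved, which is why the sketch drops unprofitable pairs rather than forcing their routing.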

Nr: 85
Title:

Optimization of Refinery Maintenance Planning Processes: A Case Study of an Oil Refinery

Authors:

Büsra Aydin, Melike K. Onat, Ocan Şahin, Barış Yıldız, Çagri Bahadir, Berker Günay and Mustafa O. Samur

Abstract: Petroleum refineries are industrial facilities with complex production processes, emphasizing the critical importance of effectively managing maintenance and repair activities to ensure unhindered progress in field processes and maintain operational continuity. A comprehensive maintenance plan is essential to extend the usable life of operational equipment such as facilities, machinery, and apparatus and to prevent any loss in operational quality. The study focuses on interdependent maintenance tasks at a petroleum refinery, where thousands of weekly maintenance work orders have predefined start and end dates. In our case study, these maintenance orders are grouped into four types based on their importance. The first and second groups are the most critical and must be completed before a given deadline. On the other hand, type three and type four orders are less critical and have soft deadlines. All orders are subject to specific resource constraints and are executed within technical units. Currently, maintenance planners carry out daily and weekly planning tasks based on their experience and expert insights. Because human planners cannot consider a large number of constraints together, they follow a simple assignment heuristic, which may result in significant underutilization of maintenance resources and a long waiting list for maintenance orders. To address this issue, in this study we develop a scheduling system that prioritizes jobs based on resource availability, time constraints, and the location of the maintenance operation. An optimization model is proposed to deal with inefficiencies that result from idleness, both for the teams that operate the machinery and for the critical machinery in the facility. In addition to the underutilization of the main workforce, the relocation of technicians from one working area to another creates further inefficiencies in the system. This study represents a significant milestone in refinery maintenance management, aspiring to enhance operational efficiency and maximize the effective utilization of resources. In the context of operations, where the planning of weekly tasks involves prioritizing orders based on constraints and resource limitations, a three-step algorithmic approach is proposed. First, we use an unconstrained binary knapsack model to decrease the problem size (the number of maintenance orders to consider for scheduling) by considering the planning horizon and the number of jobs that can fit into it. In the second step, a graph-based mathematical model is utilized to efficiently schedule weekly tasks by considering order priorities, earliest and latest start times, and maintenance resource constraints to maximize the total importance score of the jobs that will be completed in the planning horizon. Finally, a Time-Expanded Network Flow Algorithm is implemented to assign maintenance teams to the planned orders, which considers the relocation times of workers and minimizes the number of workers used to execute the scheduled maintenance tasks. The established methodology is thought to have significant applicability potential in similar industrial facilities.
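
A rough sketch of the first step only (the knapsack-based problem-size reduction), with invented orders and a hypothetical weekly capacity: a standard 0/1 knapsack dynamic program selects the subset of maintenance orders whose total duration fits the planning horizon while maximizing total importance score. The later graph-based scheduling and team-assignment steps are not shown.

```python
# 0/1 knapsack pre-selection of maintenance orders for the weekly horizon (toy data).
orders = [  # (order id, duration in hours, importance score)
    ("M1", 8, 10), ("M2", 4, 7), ("M3", 6, 6), ("M4", 10, 9), ("M5", 3, 4),
]
capacity = 20  # hypothetical weekly maintenance hours available

def knapsack(orders, capacity):
    # best[c] = (best total score, chosen order ids) using at most c hours
    best = [(0, [])] * (capacity + 1)
    for oid, dur, score in orders:
        new = best[:]
        for c in range(dur, capacity + 1):
            s, chosen = best[c - dur]
            if s + score > new[c][0]:
                new[c] = (s + score, chosen + [oid])
        best = new
    return best[capacity]

score, selected = knapsack(orders, capacity)
print("orders kept for weekly scheduling:", selected, "total importance:", score)
```

Only the retained orders would then be passed to the graph-based weekly scheduling model and, finally, to the time-expanded network flow step that assigns maintenance teams while accounting for relocation times.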