Laureano Escudero Bueno. Full Professor, Estadística e Investigación Operativa y Didáctica de la Matemática, Universidad Rey Juan Carlos |
On SFR3, a constructive matheuristic decomposition algorithm. Pilot case: Stochastic multiple allocation capacitated hub network location expansion planning under uncertainty
Abstract: Real-life optimization problems frequently require strong MILP modeling for large-scale instances. They are hard to solve, with the additional difficulty of considering capacity expansion planning (CEP) along a time horizon for the networks or systems that are the subject of the application. Such problems arise in many industrial sectors, for example energy and petrochemical networks, transportation (aircraft fleet and rapid transit network design), supply and production chain management, flow distribution through hub networks, forestry harvesting planning, and resource allocation for natural disaster relief preparedness, to name a few. Uncertainty in the main parameters is inherent to these problems and is frequently represented by a multistage strategic tree coordinated with two-stage operational trees rooted at the nodes of the strategic tree. The resulting huge MILP model, although usually well structured, requires a decomposition algorithm for problem solving. However, many such algorithms (e.g., those based on Lagrangean decomposition) require solving to optimality, at each iteration, submodels spanning a high number of consecutive nodes in the trees; because of the large dimensions of those submodels, they cannot in general be considered for stochastic CEP problems. Moreover, given the intrinsic difficulty of the problem and the huge instance dimensions (due to the network size of realistic instances as well as the cardinality of the strategic and operational scenario trees), it is unrealistic to seek an optimal solution. Therefore, a constructive matheuristic algorithm is introduced to provide a (hopefully good) solution with a guaranteed quality. It is named SFR3, standing for Scenario variables Fixing and constraints and binary variables' integrality iteratively Randomized Relaxation Reduction, where several strategies are considered. Its performance is computationally assessed by considering, as a pilot case, multistage stochastic multiple allocation capacitated hub network location expansion planning. Three categories of instances are used, based on stochastic perturbations of the well-known CAB data.
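Since the abstract describes SFR3 only at a high level, the following is a minimal, generic relax-and-fix sketch that conveys the flavour of a constructive matheuristic of this kind: iteratively solve a relaxation, randomly select some still-free binary variables, fix them to rounded (feasibility-checked) values, and repeat until a complete 0-1 solution is built. It is illustrated on a toy 0-1 knapsack with made-up data; it is not the authors' SFR3 algorithm, which fixes scenario variables and randomizes the relaxation of constraints and binary integrality over a multistage stochastic MILP.

```python
# Generic relax-and-fix sketch (illustrative only, NOT the authors' SFR3).
# Toy model: a 0-1 knapsack whose LP relaxation is solved greedily.
import random

values  = [10, 7, 9, 4, 6, 8]
weights = [ 5, 4, 6, 2, 3, 5]
capacity = 14

def solve_lp_relaxation(fixed):
    """Exact greedy solution of the knapsack LP relaxation with some
    variables already fixed to 0/1. Returns index -> value in [0, 1]."""
    x = dict(fixed)
    cap = capacity - sum(weights[i] for i, v in fixed.items() if v == 1)
    free = [i for i in range(len(values)) if i not in fixed]
    for i in sorted(free, key=lambda i: values[i] / weights[i], reverse=True):
        if cap <= 0:
            x[i] = 0.0
        elif weights[i] <= cap:
            x[i] = 1.0
            cap -= weights[i]
        else:
            x[i] = cap / weights[i]   # the single fractional item
            cap = 0
    return x

def relax_and_fix(n_fix_per_iter=2, seed=0):
    rng = random.Random(seed)
    fixed = {}                        # index -> 0/1 decisions taken so far
    while len(fixed) < len(values):
        x = solve_lp_relaxation(fixed)
        free = [i for i in range(len(values)) if i not in fixed]
        for i in rng.sample(free, min(n_fix_per_iter, len(free))):
            cand = round(x[i])        # round the relaxed value ...
            used = sum(weights[j] for j, v in fixed.items() if v == 1)
            if cand == 1 and used + weights[i] > capacity:
                cand = 0              # ... fixing to 1 only if still feasible
            fixed[i] = cand
    return fixed, sum(values[i] for i, v in fixed.items() if v == 1)

print(relax_and_fix())
```

In schemes of this kind, the key design choices are which variables to fix at each iteration (here: a random sample), to which values (here: rounded relaxation values), and how feasibility is preserved; the abstract indicates that several such strategies are considered and compared for SFR3.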
Leandro Pardo Llorente. Full Professor, Departamento de Estadística e Investigación Operativa, Universidad Complutense de Madrid |
Adaptive variable selection methods in linear regression models for high-dimensional data with loss functions based on the density power divergence
Abstract: We consider simultaneously the problem of variable selection and parameter estimation in a multiple linear regression model with high-dimensional data (the number of parameters to be estimated is of a much larger order than the number of observations). The adaptive LASSO regularization method, introduced in the literature to remedy the deficiencies of the LASSO as a consistent variable selection method, is based on minimizing a quadratic loss subject to an adaptive penalty that endows the corresponding adaptive LASSO estimators with the oracle property for variable selection, together with more affordable computation. The quadratic loss, however, yields estimators with serious robustness problems in the presence of contaminated data, which frequently arise in high-dimensional settings.
In this talk, the quadratic loss is replaced by a loss based on the density power divergence, while the penalty function is the classical adaptive penalty. A very general family of distributions, containing the normal distribution as a particular case, is assumed for the errors of the multiple linear regression model. The robustness of the proposed estimators is studied through their influence function, they are shown to satisfy the oracle property for variable selection, and their asymptotic distribution is obtained under easily verifiable conditions. The results are particularized to the case of normal errors, and an extensive simulation study is carried out to properly assess the results obtained.
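To fix ideas, here is a minimal sketch of such a penalized criterion, assuming the density power divergence of Basu et al. with tuning parameter α > 0, normal errors with scale σ, and the classical adaptive LASSO weights built from a preliminary estimator β̃; the error family studied in the talk is more general, and λ_n, γ and the exact formulation below are illustrative:

```latex
\min_{\beta,\,\sigma}\;
\frac{1}{n}\sum_{i=1}^{n}\ell_{\alpha}\!\left(y_i-\mathbf{x}_i^{\top}\beta,\,\sigma\right)
\;+\;\lambda_n\sum_{j=1}^{p}\frac{|\beta_j|}{|\tilde{\beta}_j|^{\gamma}},
\qquad
\ell_{\alpha}(r,\sigma)=\frac{1}{(2\pi\sigma^{2})^{\alpha/2}}
\left[\frac{1}{\sqrt{1+\alpha}}
-\left(1+\frac{1}{\alpha}\right)e^{-\alpha r^{2}/(2\sigma^{2})}\right].
```

For fixed σ, letting α → 0 formally reduces the loss to the squared residuals, so the classical (non-robust) adaptive LASSO is recovered as a limiting case, while α > 0 exponentially downweights large residuals and thus provides robustness against contaminated observations.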
Wenceslao González Manteiga. Full Professor, Departamento de Estatística, Análise Matemática e Optimización, Universidad de Santiago de Compostela |
Goodness-of-fit tests for statistical models, with some recent results
Abstract: The term Goodness-of-Fit (GoF) was introduced by Pearson at the beginning of the last century and refers to statistical tests that check, in an omnibus way, how well a distribution fits a data set. Since then, many papers have been devoted to the χ2 test, the Kolmogorov-Smirnov test and other related methods. The pilot function used for testing was mainly the empirical distribution function (a small numerical illustration is given after this abstract). In the last thirty years, there has been an explosion of works extending the GoF ideas to other types of functions (density function, regression function, hazard rate function, intensity function, ...) and to more general settings: directional data, functional data, incomplete data, etc.
In this talk, we will give a modern review of GoF theory, illustrating applications in topics of interest and presenting some advances with recent results.
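As a small, self-contained illustration of the classical EDF-based GoF idea mentioned above (not taken from the talk; data and parameters are made up), the Kolmogorov-Smirnov test compares the empirical distribution function of a sample with a fully specified null distribution F0:

```python
# Classical EDF-based goodness-of-fit: Kolmogorov-Smirnov test against a
# fully specified N(0, 1) null distribution (illustrative example only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=200)   # sample actually drawn from F0

# H0: the data come from N(0, 1), with parameters fixed in advance
res = stats.kstest(x, "norm", args=(0.0, 1.0))
print(f"KS statistic = {res.statistic:.3f}, p-value = {res.pvalue:.3f}")

# Caveat relevant to modern GoF theory: if the parameters of F0 are estimated
# from the same sample, the standard KS null distribution is no longer valid
# and the test must be recalibrated (e.g., by bootstrap resampling).
```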