#1 Is there just one solution?
#2 Everything is complicated?
#3 Existing rules have to be followed?
#4 Assessment of other players?
#5 Possible strategies and selection?
#6 Decision-making criteria and results?
#1 No alternatives
In a complex world, there is an infinite number of possible paths and valuations of them. The early narrowing of the solution space is often the result of a lack of time and resources as well as of ideological, ethical or intellectual limits. Competent analyses rule nothing out and anticipate the consequences without prejudice.
Conclusion: What is inconceivable cannot be realized!
#2 Complexity
Complex situations are determined by a large number of mutually influencing factors. Simple, deterministic explanations and approaches usually fall short. Decisive strength consists of anticipatory investment in solution competence, robustness and resilience, combined with a holistic view and robust predictive models of possible futures.
Conclusion: Anticipating challenges and opportunities is art!
#3 Rules and constraints
Rules and constraints are intended to serve holistic development and to limit the solution space in favour of individuals, specific groups or society. If rules hinder meaningful developments or lead to unjustified disadvantages and/or advantages, they must be adapted or abolished. This requires proof and societal acceptance.
Conclusion: Rules are not static, but need to be developed!
#4 Game theory
Games are defined by rules, players, strategies, stakes and outcomes. A distinction must also be made between individual games and a sequence of games. A strategy is dominant if no better result can be achieved regardless of the reactions of the other players. If no player can benefit from a one-sided deviation, the strategy profile is called a Nash equilibrium.
Conclusion: Cooperation and/or confrontation can pay off!
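The dominance and equilibrium checks above can be sketched for a 2x2 game. The payoff matrix below is the classic prisoner's dilemma, used purely as an illustrative assumption; action names and values are not from the text:

```python
from itertools import product

# Hypothetical 2x2 game: payoffs[(row_action, col_action)] = (row_payoff, col_payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(a_row, a_col):
    """A profile is a Nash equilibrium if no player gains by deviating alone."""
    row_pay, col_pay = payoffs[(a_row, a_col)]
    if any(payoffs[(d, a_col)][0] > row_pay for d in actions):
        return False  # the row player would deviate
    if any(payoffs[(a_row, d)][1] > col_pay for d in actions):
        return False  # the column player would deviate
    return True

equilibria = [p for p in product(actions, actions) if is_nash(*p)]
print(equilibria)  # [('defect', 'defect')] - mutual defection is the only equilibrium
```

Here "defect" is also the dominant strategy for both players, even though mutual cooperation would pay better - which is why game design and repeated play matter.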
#5 Strategy evaluation
Based on the objectives, all relevant strategies and the resulting decisions and paths must be identified in the solution space. Furthermore, the reactions of other players, dominant strategies, equilibria and possible cooperations have to be considered. The optimal strategy can then be selected according to its payoff or with regard to the selected goals.
Conclusion: Search the entire solution space for the optimal strategy!
#6 Decision making
The strategies of the players and the possible results can be significantly influenced by the design of the game. With productive efficiency, everyone optimizes their own outcome (neoliberal - winner takes all); with Pareto efficiency (static - limited change), no losses for others are allowed; with Kaldor-Hicks efficiency, losses must be compensated (sustainable - internalization).
Conclusion: The game design and rules define actions and results!
#1 Condition performance – deterministic
A deterministic performance model is based on the assumption of strict natural laws and certainty, in which only one condition is possible at any point in time. For easier understanding, the performance can be divided into any number of (discrete) state classes (grades). The service life ends with actual/defined failure or when a threshold is reached.
Conclusion: Any known, certain performance is called deterministic!
#2 Condition distribution – deterministic
With deterministic performance models, all similar assets or elements behave the same, i.e. all reach the same condition at the same time and fail at the same time. If the assets or elements are not the same age, failures and reinvestments follow a horizontally shifted age distribution. Instead of age, utilization or loading can be used to determine the service life.
Conclusion: Deterministic means all assets fail at the same age/use!
#3 Condition prediction – deterministic
Deterministic condition predictions are non-trivial, objective and valid predictions of exactly one condition at a future point in time, or of the service life and time of failure. The prediction results essentially depend on the available information and the forecast models used. It depends on the case and purpose whether such simplifications are justified.
Conclusion: Deterministic predictions provide specific time and condition!
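As a minimal sketch of such a deterministic model, assume linear deterioration over condition grades 1 (new) to 5 (failed); the rate and threshold are illustrative values, not data from the text:

```python
# Minimal deterministic performance model (assumption: linear deterioration).
RATE = 0.1        # grade loss per year (illustrative)
THRESHOLD = 5.0   # failure / intervention threshold (grade 5 = failed)

def condition(age_years: float) -> float:
    """Exactly one condition per point in time - the deterministic assumption."""
    return min(1.0 + RATE * age_years, THRESHOLD)

def service_life() -> float:
    """Age at which the failure threshold is reached."""
    return (THRESHOLD - 1.0) / RATE

print(condition(20))   # 3.0
print(service_life())  # 40.0
```

The prediction is a specific time and condition, exactly as the conclusion above states; the price is that all uncertainty is hidden in the choice of RATE.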
#4 Condition performance – stochastic
In condition performance under a variety of influences and uncertainty, different condition states are possible at the same time. Such ordered random processes can be described as discrete or continuous stochastic processes. In the Hoffmann process, a scalable performance model is integrated over any given failure distribution.
Conclusion: Performance under uncertainty can be described!
#5 Condition distribution – stochastic
In a stochastic process, the condition distribution shows the proportion of a large number of similar assets in the respective state. For any individual case, this proportion corresponds to the "a priori" probability p of being in this state at a time t. At the time of the Ø service life, the "a priori" chance to survive until then is p = 50%.
Conclusion: Stochastic means not all fail at the same time!
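A short sketch of the "a priori" survival probability, assuming a normally distributed service life; the mean and scatter are illustrative values only:

```python
import math

# Survival probability for a normally distributed service life (illustrative values).
MEAN_LIFE = 40.0  # assumed average service life in years
SIGMA = 8.0       # assumed standard deviation in years

def survival(t: float) -> float:
    """'A priori' probability that an asset is still alive at age t (1 - CDF)."""
    cdf = 0.5 * (1.0 + math.erf((t - MEAN_LIFE) / (SIGMA * math.sqrt(2.0))))
    return 1.0 - cdf

print(round(survival(40.0), 2))  # 0.5  -> 50% chance to survive to the mean life
print(round(survival(30.0), 2))  # 0.89 -> most assets are still alive at age 30
```

Not all assets fail at the same age: the survival share drops gradually instead of jumping from 100% to 0% at the deterministic service life.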
#6 Condition prediction – stochastic
In stochastic condition prediction, the deterministic prediction result becomes the expected value, with an occurrence probability of p = 0 for any single exact value. The uncertainty is described by a failure distribution f(tx), taking into account the data available at the time of prediction. The service life is calculated as a bandwidth within a confidence interval.
Conclusion: Stochastic predictions provide levels of confidence!
Optimal timing - starting points
#1 Optimization - linear problems?
#2 Optimization - integer problems?
#3 Optimization - nonlinear problems?
#4 Deterministic life cycle cost optimization?
#5 Stochastic life cycle cost optimization?
#6 Solution space - brute force vs. heuristics?
#1 Linear programming (LP)
In contrast to a ranking, optimized means that an objective function is maximized/minimized. If the boundary conditions and the objective function f(xi) are linear and continuous, the solution space is a convex polytope, and a solution can be found along its edges using linear programming (simplex), efficiently even for very large problems.
Conclusion: Even large linear problems are easy to calculate!
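The key geometric fact - the optimum of a linear problem lies on a corner of the convex polytope - can be illustrated with a tiny two-variable LP. Instead of the simplex algorithm, the sketch below uses naive vertex enumeration; the constraint and objective values are arbitrary assumptions:

```python
from itertools import combinations

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0
constraints = [  # each row (a, b, r) means a*x + b*y <= r
    (1.0, 1.0, 4.0),
    (1.0, 3.0, 6.0),
    (-1.0, 0.0, 0.0),   # x >= 0
    (0.0, -1.0, 0.0),   # y >= 0
]
objective = (3.0, 2.0)

def intersect(c1, c2):
    """Vertex where two constraint boundary lines cross (Cramer's rule)."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel boundaries, no vertex
    x = (r1 * b2 - r2 * b1) / det
    y = (a1 * r2 - a2 * r1) / det
    return (x + 0.0, y + 0.0)  # + 0.0 normalizes -0.0

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= r + 1e-9 for a, b, r in constraints)

vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: objective[0] * p[0] + objective[1] * p[1])
print(best)  # (4.0, 0.0) - objective value 12
```

Vertex enumeration explodes combinatorially with the number of constraints; the simplex method walks the edges intelligently instead of listing every corner.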
#2 Integer programming (IP)
If only integer solutions are allowed in the solution space delimited by boundary conditions, one speaks of an integer problem. In life cycle costing this corresponds e.g. to a division of the timeline into annual steps for treatment timing. Mixed-integer problems exist when using discrete and continuous variables at the same time.
Conclusion: Integer optimization problems are NP-hard!
#3 Nonlinear programming (NLP)
If the decision variables are continuous, but the boundary conditions or the objective function are nonlinear, the solution space can be convex or non-convex. In addition, local optima can occur, and modelling and calculation are much more complex; in many cases, the optimum cannot be found within finite computing time.
Conclusion: Nonlinear problems cannot always be solved!
#4 Optimizing deterministic LCC
In the simplest case of a deterministic life cycle with replacement, the annuity decreases with increasing lifespan until failure. Under the constraint of maintained function, the annuity is minimal with direct replacement at the time of failure. If the failure causes additional costs, the annuity at the time of failure is correspondingly higher.
Conclusion: Full use of service life without failure cost is optimal!
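The decreasing annuity can be sketched with the standard annuity formula A(n) = C·i / (1 - (1+i)^-n); acquisition cost, interest rate and failure cost below are illustrative assumptions:

```python
# Equivalent annual cost (annuity) of replacing an asset every n years.
C = 100_000.0   # acquisition cost (assumption)
I = 0.04        # interest rate (assumption)

def annuity(n: int, extra_failure_cost: float = 0.0) -> float:
    """Annuity of an n-year replacement cycle; failure costs are added to C."""
    return (C + extra_failure_cost) * I / (1.0 - (1.0 + I) ** -n)

for n in (10, 20, 30, 40):
    print(n, round(annuity(n)))          # annuity falls with every extra year of use
print(round(annuity(40, 20_000.0)))      # failure costs raise the annuity again
```

Without failure costs, the annuity keeps falling with service life, so full use is optimal; with failure costs, stopping just before failure becomes attractive, which is exactly where the stochastic view of the next slide takes over.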
#5 Optimizing stochastic LCC
Under uncertainty, the time of failure cannot be determined a priori. With a failure risk and no additional information, early intervention is optimal. With additional data and stochastic prediction, interventions can be timed towards the unknown time of actual failure at almost minimal annual costs.
Conclusion: Stochastic prediction for optimal results under risk!
#6 Brute force - heuristics
For combinatorial problems of small scope, an exhaustive search of the solution space is possible (brute force). In a continuous solution space, an exhaustive search is not possible without discretization. However, an approximation is often possible through an analytical approach and intelligent limitation/simplification of the problem.
Conclusion: Heuristics means intelligent approximation!
Economic solution - starting points
#1 Cost estimate for one or many projects?
#2 Reasonable prices and testing?
#3 Stronger or weaker sizing?
#4 Confidence intervals unit cost?
#5 What quantity at what cost?
#6 Optimization work-zone length & timing?
#1 Cost estimation
The costs of individual projects can fluctuate significantly up and down due to a variety of factors. With a larger number of comparable projects, these fluctuations average out and the mean price is more stable (central limit theorem). With the right strategy, better prices can be achieved in most cases.
Conclusion: Variation is possible in individual cases – on average much less!
#2 Reasonable prices
The costs generally result from the factor prices for material, labour, equipment, transport etc., with a surcharge for risks and profit. The price for the customer results from supply and demand on the market or from the level of information and willingness to pay. In the case of agreements (or cartels), prices are systematically increased.
Conclusion: Beyond a range, increased prices are no accident!
#3 Dimensioning
The costs of larger dimensions generally increase degressively due to the fixed cost component. Minimizing the acquisition costs often results in a shorter lifespan and thus high annual costs. Life cycle cost analysis provides the optimal dimensioning with minimal annual costs (annuity) for each interest rate i.
Conclusion: Buying cheap is often expensive – with interest!
#4 Confidence intervals
In many cases, degressively increasing total costs are reflected in flattening costs per unit (economies of scale). Statistically, there is a confidence interval (reliability) for the expected unit costs, which is wider for the individual case and narrower for the average, depending on the quantity.
Conclusion: Confidence intervals show the reliability of estimates!
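The narrowing of the interval for the average follows directly from the central limit theorem: the standard error shrinks with the square root of the number of comparable projects. The standard deviation and quantile below are illustrative assumptions:

```python
import math

SIGMA = 200.0   # assumed standard deviation of unit costs across projects
Z_95 = 1.96     # two-sided 95% quantile of the standard normal distribution

def half_width_mean(n: int) -> float:
    """Half-width of the 95% confidence interval for the AVERAGE unit cost."""
    return Z_95 * SIGMA / math.sqrt(n)

for n in (1, 10, 100):
    print(n, round(half_width_mean(n), 1))  # interval narrows with sqrt(n)
```

A single project stays uncertain (the individual interval keeps roughly the full scatter), but the estimate of the mean becomes ten times sharper with a hundred comparable projects.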
#5 Total cost function
The total costs usually do not increase linearly with the quantity of goods, but in many cases increase degressively due to cheaper purchasing, greater efficiency and the distribution of fixed costs. This relationship can be shown with cost functions. Lower prices can also lead to the purchase of larger quantities (even if not needed).
Conclusion: Relationships between costs and quantity are systematic!
#6 Bundling of measures
If total costs increase degressively with the quantity, a bundling of measures can be advantageous. This is the case if the savings from bundling are greater than the loss in service life. In this way, considerable savings are possible compared to currently used maintenance and rehabilitation approaches.
Conclusion: Innovative optimization method for big savings!
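The bundling effect can be sketched with a hypothetical degressive cost function (exponent < 1); the fixed cost, unit rate and exponent are assumptions for illustration only:

```python
# Degressive total cost function: C(q) = FIXED + UNIT * q**EXPONENT.
FIXED = 50_000.0   # fixed cost per work zone / measure (assumption)
UNIT = 2_000.0     # cost rate per unit of quantity (assumption)
EXPONENT = 0.8     # < 1 -> economies of scale (assumption)

def total_cost(quantity: float) -> float:
    return FIXED + UNIT * quantity ** EXPONENT

separate = total_cost(100.0) + total_cost(100.0)  # two separate measures
bundled = total_cost(200.0)                       # one bundled measure
saving = separate - bundled
print(round(separate), round(bundled), round(saving))
```

The saving (one fixed cost plus the scale effect) is what must be weighed against the loss in service life when one of the bundled assets is treated earlier than necessary.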
Applied intelligence - starting points
#1 What is intelligence?
#2 Conditional probability?
#3 Remaining service life of assets?
#4 Remaining service life by age and condition?
#5 What are neural networks?
#6 Damage detection?
#1 Definition Intelligence
Intelligence is usually understood as the (human) ability to learn from experience, adapt to new situations, understand abstract concepts and influence the environment. Artificial intelligence is concerned with automating intelligent behaviour and with machine learning.
Conclusion: No general definition - but absence is recognizable!
#2 Conditional probability
Conditional probabilities can be represented in a crosstab or path diagram. The probability of the occurrence of a conditional event results from the multiplication of the probabilities along the paths (e.g. from the number of failures p.a. and the detection rate of an inspection, the number of unnecessary replacements can be inferred).
Conclusion: Infer to connections that are not directly observable!
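The path multiplication can be sketched with assumed rates for failures, detection and false alarms; every number below is hypothetical:

```python
# Multiply probabilities along the paths of a crosstab / path diagram.
N_ASSETS = 10_000
P_FAIL = 0.01          # a priori failure probability per year (assumption)
DETECTION_RATE = 0.90  # P(flagged | failing) (assumption)
FALSE_ALARM = 0.05     # P(flagged | healthy) (assumption)

true_positives = N_ASSETS * P_FAIL * DETECTION_RATE        # failing AND flagged
false_positives = N_ASSETS * (1 - P_FAIL) * FALSE_ALARM    # healthy AND flagged
p_fail_given_flag = true_positives / (true_positives + false_positives)

print(round(true_positives), round(false_positives))  # 90 flagged failures, 495 false alarms
print(round(p_fail_given_flag, 2))                    # only ~0.15 of flagged assets actually fail
```

The unnecessary replacements (the 495 false alarms) dominate the flagged population precisely because failures are rare - a connection that is not directly observable but follows from the paths.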
#3 Remaining life of surviving assets
When referring to the remaining life of an asset portfolio, usually the remaining life of the surviving assets is to be calculated. While the average remaining service life of all assets decreases linearly, the remaining service life of the survivors lies increasingly above this average with age, since already failed assets are not included.
Conclusion: Longer life and better condition than average for survivors!
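A minimal numerical illustration with five assumed failure ages (not data from the text):

```python
import statistics

failure_ages = [20.0, 30.0, 40.0, 50.0, 60.0]   # assumed; mean service life = 40

def mean_remaining_life(age: float, survivors_only: bool) -> float:
    """Mean remaining life at a given age, over all assets or over survivors only."""
    remaining = [t - age for t in failure_ages if not survivors_only or t > age]
    return statistics.mean(remaining)

print(mean_remaining_life(0.0, False))    # 40.0 - average service life
print(mean_remaining_life(35.0, False))   # 5.0  - naive "mean life minus age"
print(mean_remaining_life(35.0, True))    # 15.0 - survivors live longer than average
```

At age 35 the two assets that already failed drop out of the calculation, so the survivors' expected remaining life (15 years) is three times the naive value (5 years).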
#4 Condition and age-related remaining life
If the characteristic condition performance, the average service life and the failure distribution are known, the condition-related remaining life can be determined. To do this, the performance curve is scaled through the boundaries of the condition threshold; the centre of gravity of the resulting area of the failure distribution yields the remaining life tr.
Conclusion: Age/condition related remaining life is predictable!
#5 Neural networks (NN)
Artificial neural networks consist of layers of neurons which exert influence through positively and negatively weighted connections. If a threshold value is reached in the activation function, the neuron passes on information. In the training phase, the network learns by modifying the weights based on learning rules; this learned knowledge is the basis for the application.
Conclusion: Neural networks must be trained before use!
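A single neuron with threshold activation, trained on the AND function, is perhaps the smallest possible sketch of "learning by modifying weights"; the learning rate and epoch count are arbitrary choices:

```python
# Perceptron: one neuron, threshold activation, error-driven weight updates.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # the AND function
weights = [0.0, 0.0]
bias = 0.0
RATE = 0.1  # learning rate (arbitrary)

def activate(x):
    """Fire (1) if the weighted sum exceeds the threshold of zero."""
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

for _ in range(20):                 # training phase: adjust weights on every error
    for x, target in data:
        error = target - activate(x)
        weights[0] += RATE * error * x[0]
        weights[1] += RATE * error * x[1]
        bias += RATE * error

print([activate(x) for x, _ in data])  # [0, 0, 0, 1] - the AND function is learned
```

After training, the learned weights are frozen and applied to new inputs - which is why the quality of the training set decides the quality of the application.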
#6 Damage detection
Major fields of application in engineering include the detection of recurring patterns in images (e.g. faces, damage) and the prediction for a large number of variables. The quality characteristic of the Neural Network is the detection rate in the test phase, i.e. the proportion of correctly recognized information in practice.
Conclusion: A training set and detection rate are crucial for reliable results!
Modular steps - starting points
#1 What does modular mean?
#2 Simplification of systems?
#3 Reliability of elements?
#4 Reliability of systems?
#5 Asset management cycle?
#6 Implementation examples?
#1 Definition modular
In systems engineering, complex tasks or systems can be divided into individual, functionally closed units. These units or modules fulfil a task in the overall system and thus enable efficient structuring or sequential or parallel processing depending on resources, schedule and priority according to the goals.
Conclusion: Break down complex tasks into manageable units!
#2 System structure
In systems theory, a system is defined as an entity that can be separated from its environment and consists of elements. The description of real complex systems requires simplifications and can take the form of block diagrams for reliability analysis, event trees or path diagrams. However, when analysing the results, these simplifications always have to be kept in mind.
Conclusion: Analyse the reliability of systems with block diagrams!
#3 Reliability of elements
The reliability R of elements can be described as a function of time or loading and is directly related to the probability of failure F = 1 - R and the failure density f. Reliability can be determined using aging or stress tests as well as systematic records and statistical analyses of existing assets and elements.
Conclusion: Keep systematic records for reliability analyses!
#4 Reliability of systems
Depending on the type of system, the system reliability is a different function of the element reliabilities. Reliability is increased in parallel systems, since only one element has to survive, while it is lower in serial systems, since the failure of one element leads to system failure. With k-out-of-n systems, the result lies in between.
Conclusion: The whole can be worse or better than its parts!
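The serial/parallel effect can be verified in a few lines; the element reliability R = 0.9 is an illustrative assumption:

```python
# System reliability from element reliability R (failure probability F = 1 - R).
R = 0.9  # assumed reliability of each identical element

def serial(n: int) -> float:
    """All n elements must survive: one failure fails the system."""
    return R ** n

def parallel(n: int) -> float:
    """At least one of n elements must survive (redundancy)."""
    return 1.0 - (1.0 - R) ** n

print(round(serial(3), 3))    # 0.729 - worse than a single element
print(round(parallel(3), 3))  # 0.999 - better than a single element
```

Three elements in series are noticeably less reliable than one, while the same three in parallel push the failure probability from 10% down to 0.1%.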
#5 Asset Management Cycle
Asset management is structured as a cyclic process and aims at a holistic view of all phases of the life cycle. The cycle consists of inventory and condition survey, prediction, modelling of measure impacts and costs, investment optimization, budgeting, construction program, commissioning and benchmarking.
Conclusion: Logical cycle for continuous, systematic improvement!
#6 Application examples
If there are no good solutions, searching for "best practices" does not make sense. The presented methodological approaches in asset management and life cycle costing were developed on the basis of extensive research and application. Thus, every new project offers the opportunity to test and develop innovative approaches.
Conclusion: Prove the theory in application and prediction!