Research work may lead to inventions and the development of new software, which is distributed in three main ways:
- through a "free licence" based on one of the classic "open source" distribution licences
- through a "licence for evaluation", of limited duration, for the business's internal testing and experimentation purposes
- through a "patent licence", enabling the business to integrate the software in a product or market it.
Security / Program proving
Vulnerabilities of PKCS#11 security hardware.
PKCS#11 is an RSA standard that specifies an API for cryptographic operations such as encryption and signing.
Some attacks exploit the PKCS#11 cryptographic key management interface; TOOKAN detects them. TOOKAN uses formal methods: it automatically reverse-engineers a model of the device's functionality, identifies an attack within the model, and then executes the attack directly on the device.
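The classic wrap/decrypt key-management flaw that such model-based tools rediscover can be sketched with a toy token. The XOR "cipher", handles and attributes below are invented for illustration; they are not the real PKCS#11 API.

```python
# Toy model of the wrap/decrypt attack on a PKCS#11-style token.
class ToyToken:
    def __init__(self):
        # handle 1: a sensitive key that must never leave the token in clear
        # handle 2: an ordinary key usable for both wrapping and decryption
        self.keys = {1: {"value": 0x2A, "sensitive": True},
                     2: {"value": 0x17, "sensitive": False}}

    def wrap(self, wrapping, target):
        # C_WrapKey-like: export key `target` encrypted under key `wrapping`
        return self.keys[target]["value"] ^ self.keys[wrapping]["value"]

    def decrypt(self, key, ciphertext):
        # C_Decrypt-like: ordinary decryption under key `key`
        return ciphertext ^ self.keys[key]["value"]

# The attack: wrap the sensitive key under key 2, then decrypt that blob
# with key 2 -- the token hands back the sensitive key's clear value.
token = ToyToken()
blob = token.wrap(wrapping=2, target=1)
leaked = token.decrypt(key=2, ciphertext=blob)
```

The flaw is a policy conflict (the same key may both wrap and decrypt), which is exactly the kind of attribute combination a model checker can search for.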
Multi-source real-time intrusion detection.
ORCHIDS is an intrusion detection and prevention tool. It detects modern complex attacks by correlating events. ORCHIDS is efficient, extensible and real-time, and connects to numerous event flows and formats, whether network or system. Standing at the crossroads of signature-based and anomaly-based detection tools, ORCHIDS is configured by a small number of generic rules: a single ORCHIDS signature is able to detect an entire range of attacks, including certain zero-day attacks.
Factoring generic medium-sized integers.
In the factoring of public RSA keys, the best algorithms (NFS and variants) require a subroutine to factorise arbitrary medium-sized integers. TIFA solves this problem, factorising numbers of the order of 60 to 80 decimal digits in a highly optimised and competitive manner.
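A standard workhorse for medium-sized integers is Pollard's rho method combined with a Miller-Rabin primality test. The sketch below illustrates the idea in pure Python; it is not TIFA's actual implementation, which is far more optimised.

```python
import math
import random

def is_prime(n):
    """Miller-Rabin, deterministic for n below ~3.3e24 with these bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def pollard_rho(n):
    """Return a non-trivial factor of composite n (Floyd cycle finding)."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = 2
        d = 1
        while d == 1:
            x = f(x)
            y = f(f(y))
            d = math.gcd(abs(x - y), n)
        if d != n:        # cycle collapsed without a factor: retry with new c
            return d

def factorise(n):
    """Full factorisation by recursive splitting."""
    if n == 1:
        return []
    if is_prime(n):
        return [n]
    d = pollard_rho(n)
    return sorted(factorise(d) + factorise(n // d))
```

Rho runs in roughly O(n^(1/4)) time, which is why NFS-scale computations delegate their medium-sized cofactors to specialised code such as TIFA.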
Counting points on elliptic curves.
Elliptic-curve cryptography is the main rival to RSA. The principal non-trivial problem is determining the number of points on an elliptic curve modulo a cryptographic-sized integer. SEA is the standard algorithm, for which the Inria TANC team has developed an optimised implementation. This allows curves other than those predetermined by NIST (National Institute of Standards and Technology) to be chosen.
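The point-counting problem can be illustrated on a toy curve: a brute-force count is checked against the quadratic-character formula N = p + 1 + Σ_x χ(x³ + ax + b). The small parameters below are purely illustrative; SEA operates at cryptographic sizes where neither method shown here is feasible.

```python
import math

# Toy point count on y^2 = x^3 + a*x + b over F_p.
p, a, b = 23, 1, 1

# Brute force: count all affine solutions, plus the point at infinity.
brute = 1 + sum(1 for x in range(p) for y in range(p)
                if (y * y - (x ** 3 + a * x + b)) % p == 0)

# Character sum: N = p + 1 + sum_x chi(x^3 + ax + b),
# where chi is the Legendre symbol (quadratic character) mod p.
def chi(t):
    t %= p
    if t == 0:
        return 0
    return 1 if pow(t, (p - 1) // 2, p) == 1 else -1

N = p + 1 + sum(chi(x ** 3 + a * x + b) for x in range(p))
```

Both counts agree, and Hasse's theorem guarantees N lies within p + 1 ± 2√p; SEA computes N for primes of hundreds of bits in polynomial time.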
Proving the primality of large integers.
Proving the primality of a given integer is one of the elementary tasks of number theory, and generating certified prime numbers for RSA keys (the converse problem to factorisation) is difficult. ECPP proves the primality of cryptographic-sized numbers and has been used to establish world records (new record at ECC 2010).
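The idea of a verifiable primality certificate can be conveyed with the simpler Pocklington n−1 test; ECPP itself replaces the n−1 group with elliptic-curve groups, and the helper below is only a sketch of the certificate idea.

```python
import math

def pocklington_witness(n, factors):
    """Search for a Pocklington witness proving n prime.

    `factors` lists the primes of n - 1, assumed here to be fully
    factored with product exceeding sqrt(n). Returns a witness a,
    or None (for composite n the Fermat condition fails).
    """
    for a in range(2, 100):
        if pow(a, n - 1, n) != 1:
            return None                      # n fails Fermat: composite
        if all(math.gcd(pow(a, (n - 1) // q, n) - 1, n) == 1
               for q in factors):
            return a                         # certificate found: n is prime
    return None

witness = pocklington_witness(13, [2, 3])    # 12 = 2^2 * 3, fully factored
```

The pair (n, a) is a short proof that anyone can re-check with a few modular exponentiations; ECPP produces certificates with the same "cheap to verify" property for numbers of tens of thousands of digits.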
A prover developed for program verification.
ALT-ERGO is an automatic prover specially designed to guarantee the correctness of programs. It combines logical reasoning with quantifiers and specialised decision procedures for certain theories, such as arithmetic and equality. It can be used from the CAVEAT or FRAMA-C platforms to prove programs written in C, from the WHY toolbox for Java programs, and from SPARK for Ada programs.
Analysers of critical embedded C code.
FRAMA-C is an open-source software platform for analysing the source code of programs written in C. Already adopted by a number of academic and industrial laboratories, it allows a complete analyser to be constructed rapidly on the basis of a new algorithm, and solutions combining several analysis techniques to be assembled.
A program verification platform.
WHY is a toolbox for proving programs. It allows you to prove, using external automatic or semi-automatic theorem-proving tools, that a piece of code complies with a formal specification of its expected behaviour.
Recursive program verification.
MOPED is an automatic verification tool, i.e. a tool used to guarantee the correctness of programs. MOPED is based on the theory of pushdown automata, which model programs with procedures, including recursive ones. One area in which MOPED has been used is the verification of device drivers in Windows; another is the verification of programs written in Java. For this second area, MOPED was given a graphical user interface integrated into Eclipse.
Statistical tools for stochastic models of complex systems.
This tool calculates the probabilities associated with the qualitative and quantitative properties of stochastic models for complex systems. The tool runs synchronised models using a hybrid automaton, and then selects a subset of these runs and applies a statistical analysis in order to produce the final result.
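The statistical step can be sketched as plain Monte-Carlo estimation over sampled runs. The random-walk model and the property below are invented for the demonstration; the actual tool handles far richer stochastic models and properties.

```python
import random

def smc_estimate(simulate, holds, runs=2000, seed=0):
    """Statistical model checking: estimate P(property) over sampled runs."""
    rng = random.Random(seed)
    return sum(holds(simulate(rng)) for _ in range(runs)) / runs

# Invented stochastic model: a +/-1 random walk observed for 30 steps.
def walk(rng):
    path, x = [0], 0
    for _ in range(30):
        x += rng.choice((-1, 1))
        path.append(x)
    return path

# Property of interest: "the walk reaches level +5 at some point".
p_reach = smc_estimate(walk, lambda path: max(path) >= 5)
```

Unlike exhaustive model checking, the answer is an estimate; standard concentration bounds (e.g. Chernoff-Hoeffding) turn the number of runs into a confidence level.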
Networks / Distributed databases
Multihop wireless ad-hoc communications.
OLSR is a classic Internet routing protocol optimised to provide increased agility, in order to manage wireless IP communication directly between potentially mobile users, without necessarily passing through an access point. OLSR allows the users of such a network to self-organise the routing of IP communication between two remote users, who cannot communicate directly, via other users. OLSR is standardised in the IP protocol stack as RFC 3626, and will soon appear under the name OLSRv2.
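The optimisation at the heart of OLSR's flooding is the greedy selection of multipoint relays (MPRs): each node picks the smallest set of 1-hop neighbours that covers all its 2-hop neighbours. A sketch on a made-up topology:

```python
# Greedy MPR selection (RFC 3626 uses a refinement of this heuristic).
def select_mprs(coverage):
    """coverage: 1-hop neighbour -> set of 2-hop nodes it reaches.
    Greedily add relays until every 2-hop node is covered."""
    uncovered = set().union(*coverage.values())
    mprs = set()
    while uncovered:
        # pick the neighbour covering the most still-uncovered 2-hop nodes
        best = max(coverage, key=lambda n: len(coverage[n] & uncovered))
        mprs.add(best)
        uncovered -= coverage[best]
    return mprs

# Invented example: neighbours A, B, C and their 2-hop coverage.
coverage = {"A": {"x", "y"}, "B": {"y", "z"}, "C": {"z"}}
relays = select_mprs(coverage)
```

Only MPRs retransmit broadcast traffic, which is what cuts the flooding overhead in dense wireless networks.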
Extension of the OSPF protocol to wireless ad-hoc networks.
OSPF-MPR uses certain OLSR techniques to make classic wired OSPF routers and possibly mobile wireless ad-hoc routers work together within the same network. The advantage of compatibility with the standard OSPF routing protocol is improved quality of service for wireless communication, at the expense of some loss of agility compared to OLSR. OSPF-MPR is standardised in the IP protocol stack as RFC 5449.
Multimedia streaming services on ad-hoc mesh or mobile Wifi networks.
The main difficulty for multimedia streaming on mobile Wifi networks (Mesh TV type) is managing quality of service when links go down. Protocols of the MOST family maintain an optimal overlay and guarantee a seamless transition when the distribution trees change, without interrupting services or degrading quality. The CMOST variant (Centralized MOST), as specifications stand, offers a centralised implementation on a dedicated streaming server, which allows usage and subscriptions to be monitored.
Effective dissemination and interrogation of distributed web data.
ViP2P is a distributed symmetric network, without a central coordinator (peer-to-peer), within which peers can share data obtained via a system of continuous queries. Each peer archives the results obtained and can supply them in order to effectively respond to "instantaneous" queries. ViP2P enables information to be shared in a distributed, targeted and effective manner using extremely precise queries. The system has been tried and tested within networks of hundreds of peers and hundreds of gigabytes of data.
Optimised storage of semantic web data.
RDF data describes resources in a flexible and generic manner. Knowledge is combined with a data set to describe the properties of different types of resources and their relations. Relational databases are ill-equipped to deal with this kind of data. RDFViewS greatly improves performance in storing, querying and updating RDF data by drawing the maximum benefit from the relational system's capacities, with the aid of materialized views.
Statistical modeling / Probabilities
Software for classifying multidimensional data in unsupervised or supervised mode.
The Mixmod software program can be used to handle classification problems in a Data Mining or Statistical Learning context. It is based on probability-law mixture models. It offers a wealth of models for quantitative and qualitative data, identified by efficient algorithms as well as result-interpretation tools and precise model-selection criteria. Written in C++, Mixmod is interfaced with Scilab and Matlab and distributed under the GNU license (GPL).
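The mixture-model estimation at the core of such tools is typically carried out with the EM algorithm. Below is a minimal 1-D, two-component sketch in pure Python; Mixmod itself handles multivariate data, many model families and principled model selection.

```python
import math
import random

random.seed(0)
# Synthetic 1-D data from two Gaussians (a stand-in for real input data).
data = ([random.gauss(0, 1) for _ in range(200)]
        + [random.gauss(5, 1) for _ in range(200)])

def em_two_gaussians(xs, iters=50):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = [min(xs), max(xs)]          # crude initialisation
    pi = [0.5, 0.5]
    var = [1.0, 1.0]
    for _ in range(iters):
        # E step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            w = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M step: re-estimate weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk
    return mu

means = sorted(em_two_gaussians(data))
```

The E step computes soft cluster assignments and the M step re-fits each component to its weighted data; classification then reads off the highest responsibility per point.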
Extension of the Mixmod software for processing large tables.
MIXMOD-HD contains new functions specifically designed for the processing of large tables. They enable efficient processing of tables including several hundred variables, and classification of tables with several thousand columns into homogeneous rectangular blocks. The techniques used are particularly suited to the analysis of very sparse arrays.
Unsupervised segmentation through spatialised Gaussian mixture models.
SMIXMOD is a tool for unsupervised image segmentation: it classifies pixels into homogeneous categories without any need for the user to specify these categories or their names. It extends the MIXMOD mixture models by allowing proportions to vary with position, and therefore allows a spatial structure in the data to be taken into account. It has been used successfully to segment hyperspectral images of ancient materials acquired at Synchrotron Soleil.
PK/PD modeling tool.
MONOLIX is a piece of population pharmacology modeling software. It can be used for maximum-likelihood estimation of the parameters of non-linear, mixed-effect models. MONOLIX also offers many diagnostic and model selection tools. MONOLIX is widely used in the pharmaceutical industry. Its powerful algorithms make it an essential tool for anyone studying complex models today (e.g. viral dynamics). Version 4.0 of MONOLIX will be available in September 2011 and will be distributed and maintained by the company LIXOFT.
Algorithms for optimal signal segmentation.
Many numerical series have certain characteristics that change abruptly at different times. Notable examples include medical signals (EEG, ECG), genome profiles (aCGH), financial series and seismic data series. According to the type of data studied, these changes may, for example, affect the mean, the variance or the spectrum. The mathematical and algorithmic tools developed in SEGSIG aim to segment such signals by automatically determining the points at which these sudden changes occur.
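One standard formulation is penalised least-squares segmentation solved by dynamic programming, sketched below for changes in the mean; SEGSIG's actual algorithms may differ in both model and optimisation.

```python
# Optimal segmentation: minimise within-segment squared error plus a
# fixed penalty per segment, by O(n^2) dynamic programming.
def segment(signal, penalty):
    """Return the optimal breakpoint positions."""
    n = len(signal)
    S = [0.0] * (n + 1)   # prefix sums
    Q = [0.0] * (n + 1)   # prefix sums of squares
    for i, x in enumerate(signal):
        S[i + 1] = S[i] + x
        Q[i + 1] = Q[i] + x * x

    def cost(i, j):       # squared error of signal[i:j] around its mean
        s, q, m = S[j] - S[i], Q[j] - Q[i], j - i
        return q - s * s / m

    best = [0.0] + [float("inf")] * n
    prev = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j) + penalty
            if c < best[j]:
                best[j], prev[j] = c, i
    # backtrack the breakpoints
    cuts, j = [], n
    while j > 0:
        j = prev[j]
        if j > 0:
            cuts.append(j)
    return sorted(cuts)

cuts = segment([0.0] * 20 + [10.0] * 20, penalty=5.0)
```

The penalty plays the role of a model-selection criterion: a larger penalty yields fewer, more confident change points.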
Sequential decision making tool.
Monte-Carlo Tree Search (MCTS) techniques search possibility trees using random sampling methods. The MoGo application is an example of the use of these techniques in sequential decision making (i.e. problems in which a decision must be taken at each time step). The method has been used on library optimisation problems and is currently being tested on electricity generation planning problems.
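The selection rule at the heart of MCTS balances exploitation and exploration; the UCB1 formula below is the usual choice, shown here on a bandit with invented reward probabilities rather than a full game tree.

```python
import math
import random

def ucb1_run(probs, steps=3000, seed=0):
    """Repeatedly pick the action maximising mean + sqrt(2 ln t / n)."""
    rng = random.Random(seed)
    counts = [0] * len(probs)
    sums = [0.0] * len(probs)
    for t in range(1, steps + 1):
        if t <= len(probs):
            a = t - 1                       # play each action once first
        else:
            a = max(range(len(probs)),
                    key=lambda k: sums[k] / counts[k]
                    + math.sqrt(2 * math.log(t) / counts[k]))
        counts[a] += 1
        sums[a] += 1.0 if rng.random() < probs[a] else 0.0
    return counts

# Three invented actions with success probabilities 0.2, 0.5 and 0.8.
counts = ucb1_run([0.2, 0.5, 0.8])
```

In MCTS this rule is applied at every node of the search tree, so promising branches are sampled more often while weak ones still receive occasional visits.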
Technology for problem-solving by sequential division.
DaE technology uses evolutionary algorithms to solve problems by sequential division. The problem, it is assumed, is to link (within a space to be defined) a starting point to a goal. The evolutionary computation generates intermediate points and uses an existing specialized algorithm to link successive points. The hypothesis is that optimisation will be capable of generating intermediate problems that are easier for the existing algorithm to solve than the initial problem. Following an initial demonstration of its theoretical pertinence, DaE has become the first ‘all-purpose’ scheduler (i.e. one that is capable of solving various types of scheduling problems in quasi-optimal fashion).
Tool for numerical optimization without derivatives.
The CMA-ES algorithm is a stochastic method for optimizing numerical functions without derivatives. It can be used to handle difficult optimization problems that are highly non-linear, poorly conditioned, non-convex, multi-modal or noisy, and is several orders of magnitude faster than competing methods (GA, PSO). The algorithm, implemented in various languages (Python, C, C++, Java, Matlab, Scilab), has an easy-to-use interface and a visualization tool for tracking the progress of the optimization procedure.
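The flavour of derivative-free stochastic search can be conveyed by the much simpler (1+1) evolution strategy with a 1/5th-success step-size rule, a distant and heavily simplified relative of CMA-ES (which additionally adapts a full covariance matrix).

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iters=2000, seed=0):
    """(1+1)-ES: mutate, keep the better point, adapt the step size."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.22          # success: enlarge the step
        else:
            sigma *= 0.95          # failure: shrink the step
    return x, fx

# Sphere function as a toy objective; no derivatives are ever used.
sphere = lambda v: sum(t * t for t in v)
best, fbest = one_plus_one_es(sphere, [5.0, 5.0])
```

The two factors are chosen so that the step size is stationary at roughly a 1/5 success rate; CMA-ES replaces this scalar rule with cumulative path statistics over a whole population.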
Tool for numerical optimization of expensive functions without derivatives.
lmm-CMA-ES is designed for the numerical optimization of expensive functions for which an evaluation can take tens of minutes. This algorithm is the younger brother of the CMA-ES algorithm. It pairs the CMA-ES method with learning of quadratic meta-models (also known as ‘surrogates’) that create local models of the objective function and are designed to reduce the number of evaluations of the real objective function, which is expensive to compute, in order to reduce the total cost of the optimization procedure.
Platform for comparison of optimization algorithms.
COCO is a platform for comparing methods for the optimization of numerical functions. COCO makes it easy to measure the performance of an optimization method over a set of test functions and to compare it with that of other optimization algorithms. The platform automatically launches the optimization runs on the test functions and saves the resulting data, which are then automatically processed into a set of graphs illustrating performance.
A software toolkit for fractal processing of signals and images.
FracLab is an integrated set of MatLab and C routines for various signal and image processing tasks using fractal and multi-fractal methods. FracLab can be applied to all data types, without any hypothesis being made about “fractality”. However, it will only yield interesting results if the signals are irregular, like, for example, the majority of bio-medical signals and images, Internet traffic traces, stock market time series, and recordings of many physical phenomena. FracLab may be used in a variety of ways and is very simple to use thanks to a carefully-developed user interface and an integrated help tool.
Interaction / Visualization
Toolkit for zoomable user interfaces.
ZVTM is a toolkit enabling implementation of multi-scale or zoomable interfaces based on a large quantity of data and a wealth of graphical instructions. ZVTM can be used for browsing large databases, social network visualization applications and checking techniques. It offers great visual qualities while maintaining excellent performance levels. Several examples of ZVTM applications are presented on the WILD platform.
Linux windowing system
METISSE enables the prototyping of original window-management interaction techniques and their testing with real (unmodified) applications on a daily basis. METISSE can also be used to incorporate standard applications within another application. For example, Pok3D is a 3D game written in OpenGL which uses METISSE to integrate 2D components written in GTK+ (a classic 2D toolkit) for configuration, chatting, etc. METISSE has also been used with ZVTM to move windows from a laptop to a video wall.
Programming of interactions with state machines
Programming interactions involves defining the reactions of the system to events initiated by the user. The widely-used model for event handling is very basic, offering only an isolated callback for each type of event. In contrast, SwingStates can be used to describe the structure and context in which events must reach the system, producing programs that are more readable and easier to maintain. Built on Java Swing, SwingStates makes it possible to redefine the behavior of existing ‘widgets’ or to define more innovative ones in as few lines of code as possible.
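The state-machine style of interaction programming can be sketched in a few lines, here in Python rather than Java Swing, with an invented drag interaction: events only have an effect in the states where they are meaningful.

```python
# A SwingStates-style interaction as an explicit state machine.
TRANSITIONS = {
    ("idle", "press"): "dragging",
    ("dragging", "move"): "dragging",
    ("dragging", "release"): "idle",
}

def run(events):
    """Feed an event stream through the machine; track the dragged position."""
    state, pos = "idle", (0, 0)
    for kind, payload in events:
        nxt = TRANSITIONS.get((state, kind))
        if nxt is None:
            continue                 # event not meaningful in this state
        if kind == "move":
            pos = payload            # only moves while dragging take effect
        state = nxt
    return state, pos

state, pos = run([("move", (9, 9)),  # ignored: not dragging yet
                  ("press", None), ("move", (3, 4)), ("release", None)])
```

Compared with isolated callbacks, the transition table makes the legal event sequences explicit and keeps the context (which state we are in) out of ad-hoc flags.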
Interactive exploration of multi-dimensional data through scatter-plot matrices.
ScatterDice is a tool for exploring multidimensional data via scatter plots. A scatter-plot matrix provides an overview of the database, and the user can examine the scatter plots in which he/she is interested. Transitions between scatter plots are visualized as 3D rotations. The system can be used to make and iteratively modify queries (e.g. to search for a house in a real-estate database) by making selections and sculpting them according to different viewpoints.
Exploration of multivariate social networks through the browsing of scatter-plot matrices.
Social networks collected by historians or sociologists often have a large number of attributes, which makes them difficult to visualize and explore. GraphDice enables analysts to explore these networks quickly. GraphDice is based on ScatterDice technology and extends the tools offered for browsing within scatter plots by applying them to multivariate node-link diagrams.
Hybrid visualization of social networks.
NodeTrix is a hybrid representation of social networks (e.g. networks for collaboration among researchers) which combines the advantages of traditional node-link diagrams with those of adjacency matrices. The former give an idea of the overall structure of the network, while the latter can be used to analyse the communities. NodeTrix also includes techniques for building these visual representations through direct manipulation.
Multi-user analysis of social networks.
CoCoNutTrix is an experimental system for the multi-user analysis of social networks based on the NodeTrix hybrid visualization system. Several analysts can collaborate and interact with the system at the same time using four mice and two projectors. These collaborations are particularly useful when each analyst only knows part of a broad social network (e.g. researchers from different sub-communities).
A system for the interactive exploration of large family trees.
GeneaQuilts is a new type of family-tree exploration system. Whereas traditional systems, based on node-link representations, quickly become unreadable, GeneaQuilts can be used to explore family trees involving several thousand individuals. The system is based upon an interactive matrix representation of family trees. Fields of application include history, anthropology and phylogenetics.
A toolkit for the development of information visualization systems.
Information visualization is a rapidly growing field that has given rise to a lot of highly promising results and innovations. New information visualization techniques are, however, difficult to develop without appropriate tools. InfoVis Toolkit is a Java library that uses the knowledge and know-how acquired in the field of information visualization to help developers create high-quality information visualization applications.
Guide to rapidly accessing data from the major international Biology portals (e.g. Entrez/NCBI, SRS/EBI).
BIOGUIDE offers a query guide that will enable you to access the masses of data stored in public data sources quickly and efficiently. The user does not have to query the different sources independently and can express his/her queries in a simple way, by selecting the entities in which he/she is interested (Gene, Disease, Protein, etc.). BIOGUIDE takes into account users’ preferences with regard to the types of sources to be queried and follows different querying strategies. BIOGUIDE draws upon all the complementarity of biological information and also makes it easier for the user to cope with any contradictory information found (contradictory expert opinions).
Guide to associating a list of genes with a biological phenomenon.
GeneValorization offers a simple interface which allows the user to rapidly identify the extent to which a set of genes is known in the literature to be associated with a given biological phenomenon, e.g. a disease. The user provides a list of the names of the genes that he/she is studying and a list of keywords describing the context of the study. GeneValorization automatically and rapidly searches for existing publications that contain both the keywords and the gene names, and provides a result matrix and associated graphs. The user may then confirm his/her biological hypotheses, or realize that the correlation identified (e.g. gene/disease) is an original one.
System for modeling and simulation of the dynamics of large molecular assemblies.
HSIM is a simulation system for biological processes, which uses a model based on probabilistic rewriting rules that mimic biochemical reactions. The system implements a virtual bacterium in which molecules (proteins, metabolites, etc.) interact according to the biological reaction rules given in the model. This entity-centered simulation enables molecular assemblies and the spatial location of various molecules to be taken into account. In addition, as it is a discrete and stochastic simulation, any presence of very small concentrations of certain reactants will be handled correctly.
Visualization of secondary structures of RNA.
VARNA is a tool devoted to the automatic drawing, visualization and annotation of the secondary structure of an RNA molecule. Programmed in Java, it is compatible with all operating systems. VARNA has been designed for easy integration into a website or a database front end, and can also be used as a free-standing program. VARNA uses four different drawing algorithms and supports all the classic file formats for RNA structures. It enables users to annotate and modify the structures, either with the mouse or on the command line, and can export drawings in the most widely-used image formats.
Discrimination tool for biological complexes whose structure was obtained by crystallography.
DIMOVO is a tool that enables discrimination between biological complexes and simple crystallographic contacts for 3D structures of protein complexes. Accessible in server form, DIMOVO provides a reliable discrimination function for complexes whose structure was obtained by crystallography. Designed for use by experimental scientists, it is easy to use, yields fast results, and provides a confidence interval.
Signal simulation tool for Diffusion Magnetic Resonance Imaging.
Magnetic resonance imaging of the diffusion process of water in the brain (dMRI) is an imaging technique based on in-phase coding of the Brownian motion of water molecules. Today, it remains the only in vivo method of accessing the human connectome. Recently, it was shown that dMRI can provide data on brain activity. FVforDMRI is a code for simulating the dMRI signal on the voxel scale, taking into account cell geometry, as well as in-phase coding of diffusion processes similar to those possible on MRI systems. It is used to clarify the links between the physical properties of biological tissue and the images obtained by MRI scanners.
Inversion algorithms for multi-static microwave imaging.
A Fortran 90 code library that can be used to simulate 2D or 3D microwave-imaging experiments in heterogeneous environments, such as living tissues. This library makes it possible, first of all, to numerically simulate multi-static microwave-imaging experiments to reconstruct the shape of inclusions buried in heterogeneous environments. The simulation involves calculating the electromagnetic field recorded by sensors following the excitation of the probed environment with fixed-frequency waves (in resonant conditions). It also contains linear-sampling inversion algorithms for imaging the environment based on the electromagnetic measurements. One potential application is in mammography for the detection of cancerous cells. This code can also be used to simulate radar imaging experiments and non-destructive testing of structures via electromagnetic waves.
Processing, analysis and visualization of medical data for clinicians.
MedInria is a free, multi-platform software program for analyzing, processing and visualizing medical images. Through an intuitive user interface, MedInria offers processing functions ranging from the most standard to the most advanced: in particular, visualization of 2D/3D/4D images, image registration, processing of diffusion images and tractography.
Statistical learning in Python.
scikit-learn can be used to extract the structure of complex data (texts, images) and to classify such data using the latest techniques. It is the reference library for statistical learning in Python, with high-quality code (efficient implementations, homogeneous API, carefully-prepared documentation, near-exhaustive tests). scikit-learn was developed as an open-source tool and is available under a BSD license.
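The homogeneous estimator API the library standardises (fit on training data, predict on new data) can be illustrated with a from-scratch nearest-centroid classifier; this is a sketch following the convention, not scikit-learn code itself.

```python
# A minimal classifier with the fit/predict convention.
class NearestCentroid:
    def fit(self, X, y):
        # group rows by label, then average each column per label
        groups = {}
        for row, label in zip(X, y):
            groups.setdefault(label, []).append(row)
        self.centroids_ = {
            label: [sum(col) / len(rows) for col in zip(*rows)]
            for label, rows in groups.items()}
        return self

    def predict(self, X):
        def dist2(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b))
        return [min(self.centroids_,
                    key=lambda c: dist2(row, self.centroids_[c]))
                for row in X]

clf = NearestCentroid().fit([[0, 0], [1, 0], [5, 5], [6, 5]],
                            ["a", "a", "b", "b"])
labels = clf.predict([[0.5, 0.2], [5.5, 4.9]])
```

Because every estimator exposes the same two methods, models can be swapped or chained without changing the surrounding code, which is a large part of the library's appeal.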
Python library for accessing XNAT services.
PyXNAT uses XNAT web services to make it possible to access the database from Python scripts instead of using a Web interface. This approach was developed to enable the automation of certain operations involving XNAT databases containing large volumes of data. The choice of Python means that full advantage can be taken of the numerous libraries devoted to neuro-imaging which are available in that language.
Python library for analyzing neuro-imaging data.
NiPy can be used to perform a number of MRI image processing operations: image registration and image segmentation, standard analysis of functional MRI of brain activation, group analyses, image visualization. This library was developed in a collaborative open-source framework and is available under a BSD license.
Library for performing multivariate analysis in multi-subject functional MRI.
Canica is used to study the functional connectivity of the brain, by extracting coherent networks from functional MRI data, e.g. those acquired while the subject is at rest. A population model is used so that only reproducible components are retained. Canica produces maps indicating the main cerebral networks.
Python reference software for visualization of scientific data in 3D.
Mayavi is based on the VTK toolkit and has a simplified interface that enables users to represent scientific information in 3D by means of a few clicks or a few lines of code. Mayavi can be used as free-standing software, as a library or as a set of reusable graphical components in a bespoke application.
Rapid development of tools for the analysis and visualization of data masses in neuro-imaging.
BrainVISA is a platform dedicated to neuro-imaging research. It enables rapid development of applications by promoting the reuse of analysis and visualization components, the deployment of software in multi-platform environments, the automation of data mass analysis using database and parallel-computing infrastructures and the creation of interactive visualization scenes combining heterogeneous data. BrainVISA is mainly used in neuro-imaging research platforms, in international research projects and for the distribution of research tools.
Regular-graph function optimizer using parallel architectures.
FastPMP is a specific optimizer for regular graphs, offering real-time solutions using parallel architectures. FastPMP is based on a generic optimization system that can be used to solve graph pair-assignment problems and that adapts according to the size of the graph and the interaction constraints between these pairs, while retaining the traditional linear-complexity approach. This mathematical framework, combined with modern parallel architectures (standard graphics cards), leads to exceptional performance given the complexity involved. It also offers the best possible compromise between speed and performance and is particularly applicable to data masses. FastPMP can be applied to data-denoising problems, segmentation, organ extraction, tumor detection, etc.
Tool for merging/registering multi-modal 2D and 3D images in real time.
RT-DROP is a new, quick and efficient tool for registering 2D and 3D images, based on new algorithms that do not require a cost function derivative and enable results to be obtained in real time. RT-DROP matches images from one or more modes by quickly and efficiently calculating a non-rigid/deformable deformation field. Using parallel architectures and combining these algorithms, it offers unequalled performance levels and is particularly well-suited to applications involving organs undergoing elastic deformation. Optimization is performed with the help of the FastPMP tool. RT-DROP is particularly suitable for the registration of images taken of the same patient at different times and with different scales and methods, and can take account of dynamic deformations of organs (dilation, respiration, etc.).
Clustering solution for large-scale population measurements.
Cluster-LP is a clustering solution for large-scale population measurements. Based on a set of populations and the dominant characteristics of the behavior of this set, Cluster-LP automatically determines the number of clusters, their centers and the labels assigned to the samples observed. It is modular in the metric used to measure the suitability of a sample for a cluster, makes no assumptions about data distribution, and adapts according to the size of the sample. It is well-suited to applications with an unknown number of populations, or to population samples with parameters of unknown origin or with non-linear and non-homogeneous behavior, notably in the medical field. Applications include the handling of large cohorts without any prior knowledge of the number of clusters, the construction of estimators for clinical studies, and the identification of biomarkers in imaging.
Control / Optimization
Open platform taking the form of a toolbox exclusively dedicated to optimal control.
The aim of this project is to develop an open-source toolbox for solving optimal control problems, in cooperation with industrial and academic partners. Optimal control is the optimization of dynamic systems governed by differential equations, and it has many applications in transportation, optimization of energy processes, and biology.
Technique for management of long-term supply contracts, applied especially to the field of energy resources.
CTO is a technique based on Stochastic Dual Dynamic Programming (SDDP) algorithms. Though developed for the management of long-term natural-gas contracts, it is general in its scope and may be applied to other commodities.
Language based on stochastic programming.
OCoPHyS techniques are used to optimally control systems operating under various conditions or modes. Such techniques may, for instance, be used to define optimal management for hybrid engines or vehicles with several energy sources. The methods are nevertheless fairly general and have many other applications.
Language for optimal-trajectory planning and collision avoidance.
It simulates changes in controlled dynamic systems, including any uncertainties. The formalism applied here allows for planning of optimal strategies, using criteria such as time and energy, as well as for conflict resolution and avoidance of collisions with mobile or stationary obstacles.
Matlab toolbox for analysing stability in classical and fractional delay systems.
YALTA is a tool dedicated to the input–output stability of systems with classical or fractional delays given by their transfer function. It applies to systems of the retarded type (e.g. closed-loop control in the presence of communication delays) or of the neutral type (e.g. the modelling of transport problems).
For fixed delays, YALTA indicates the position of high-modulus poles, as well as small-modulus unstable poles for classical systems.
Control of closed quantum systems.
Q-Track automatically calculates control laws for closed quantum systems, determining sequences of pulses that govern changes in such a system when decoherence effects are negligible. The underlying algorithm tracks a spatial trajectory over time, with a given degree of precision, for probability distributions between energy levels. The number of energy levels may be chosen. If the system is obtained through truncation of one with a greater, even infinite, number of energy levels, the design of the pulse sequence ensures its efficacy for the original system as well.
Biomimetic image reconstruction
IRHD attempts to reconstruct corrupted images by simulating the procedure applied by the brain. A defining characteristic of the IRHD method is that no prior information is needed on the location or types of corrupted regions.
Another major advantage of the program is that it may be massively parallelized, permitting its use with very large images.
IRHD is based on the model put forward by Petitot, Citti, and Sarti for the geometry of human vision. The kernel of its algorithm is the solution of a three-dimensional hypoelliptic diffusion equation.
Tropical techniques for performance assessment and verification
Tropical, or max-plus, algebra allows expression of equations and inequations involving min and max laws, in addition to standard arithmetic.
Tropical algebra can be used to define equations for synchronization phenomena associated with discrete-event systems (e.g. transport networks, communication networks, and manufacturing systems). It makes it possible to locate bottlenecks and optimize resources, and it can yield concise expressions for invariants, which can be used for checking programs and time-delay systems. What tropical-algebra software is available? The Maxplus toolbox (C and Fortran) distributed with Scicoslab allows manipulation of large linear max-plus systems (e.g. flow calculation and deadlines), and it is extended by the OCaml TPLib library for quickly performing advanced operations on tropical polyhedra (invariant calculations).
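A max-plus matrix-vector product is a one-liner, and iterating it yields the event dates of a small timed event graph; the firing delays below are invented for the demonstration.

```python
# Max-plus linear dynamics: x(k+1)_i = max_j (A[i][j] + x(k)_j),
# i.e. the tropical "sum of products" with (+, max) replacing (*, +).
NEG_INF = float("-inf")

def maxplus_matvec(A, x):
    return [max(aij + xj for aij, xj in zip(row, x)) for row in A]

# Two machines: machine 0 waits 2 time units on machine 1's previous
# cycle, machine 1 waits 3 units on machine 0's (NEG_INF = no dependency).
A = [[0, 2],
     [3, NEG_INF]]
x = [0, 0]
for _ in range(3):                  # three production cycles
    x = maxplus_matvec(A, x)
```

Synchronisation ("wait for the latest input") becomes linear in this algebra, which is why throughput and bottleneck questions reduce to tropical eigenvalue problems.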
Long-horizon large-scale dynamic programming and games
What is a long horizon in dynamic programming? Dynamic programming lets players optimize their revenue when they must make a series of decisions in a hostile and uncertain environment (zero-sum games). For some problems, the planning horizon is long, and sustainable policies are sought. This leads to selecting a discount rate near zero, or optimizing an average revenue per unit of time. Doesn't the concept of a long horizon apply solely to economics? The same problems exist for optimization of e-reputation, population dynamics, and static program analysis.
What long-horizon software is available? The C PIGames library can be used to solve large-scale dynamic programming problems with one or two players, for various parameters (e.g. updating, option to stop, and average payment). Some very large-scale problems use special algorithms. For example, a project led with Orange addresses yield management and Web applications.
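The basic solver behind such libraries is value iteration on the Bellman operator; a one-player sketch on an invented two-state problem (with discount 0.95, i.e. a discount rate near zero) illustrates the idea.

```python
# Discounted dynamic programming by value iteration.
def value_iteration(rewards, trans, gamma=0.95, tol=1e-8):
    """rewards[s][a] and trans[s][a] = next state; returns the value V."""
    V = [0.0] * len(rewards)
    while True:
        # one Bellman update: best action from each state
        newV = [max(rewards[s][a] + gamma * V[trans[s][a]]
                    for a in range(len(rewards[s])))
                for s in range(len(rewards))]
        if max(abs(a - b) for a, b in zip(newV, V)) < tol:
            return newV
        V = newV

# State 0: action 0 stays (reward 1), action 1 jumps to state 1 (reward 0).
# State 1: single action, stay with reward 2.
V = value_iteration(rewards=[[1.0, 0.0], [2.0]],
                    trans=[[0, 1], [1]])
```

Because the Bellman operator is a gamma-contraction, the iteration converges geometrically; the closer the discount is to 1 (the longer the horizon), the slower this baseline becomes, which motivates the specialised large-scale algorithms mentioned above.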