Friday, September 14, 2007

Rance Cleaveland's Answers

It is a great pleasure for me to post some answers by my old mate Rance Cleaveland to the questions raised at the CONCUR workshop. Thanks to Rance for sharing his experience with us!


What an interesting workshop; I deeply regret not being able to attend.

At Luca's gentle suggestion ;-) I thought I would have a go at questions he posed at the end of his post.


What is the role/importance of real-time in modelling? Does industry want dense-time or discrete-time models?


In my experience with Reactis, my company's testing and verification tool, Reactis customers absolutely need real-time support. This is a consequence of their application areas: they develop embedded control software for the automotive and aerospace industries. The most widely used commercial modeling languages (Simulink, Stateflow, SCADE) also include real-time as an intrinsic part of their semantics.

Ironically, given the sound and fury in the academic community, the industrial people I have interacted with for the most part do not care whether time is discrete or continuous. Occasionally I have encountered customers who want to do hybrid-systems-style modeling, and for them continuity is important.


How does one involve small- and medium-size companies in collaborations with concurrency theoreticians/practitioners? Does "company size" matter?



Regarding SMEs (small- and medium-size enterprises ... a common acronym among US policymakers): I think the best way to involve them is via projects funded by third parties (governments, or a large partner). SMEs generally cannot carry the overhead of "blue-sky" research, and their investment-return horizons are necessarily shorter. At both Reactive Systems and Fraunhofer, our concurrency-oriented SME collaborations have involved either collaborations on government research grants or project work on behalf of a larger customer. In the latter cases, it was important that we work in commercial notations (e.g. Simulink) rather than research-oriented ones.

Large companies do have resources to put into more basic research, but there is another phenomenon to be aware of: researchers in these companies often view outside researchers as competitors for their internal research funds. Collaborations with these organizations therefore depend heavily on the personal connections between company and non-company researchers. So-called "business unit" personnel are often the easiest to deal with, but in this case there needs to be a clear, typically short-term, pay-off to them for the collaboration.


Is there any need for stochastic and probabilistic modelling in applications? More pointedly, have you met an example that you could not model because your tool does not support stochastic or probabilistic phenomena?


We support simple probabilistic modeling in Reactis in the form of probability distributions over system inputs that we sample when creating tests. This feature, however, is almost never used by our customers. The reasons mostly boil down to the lack of training these engineers receive in stochastic modeling and control, which in turn is tied to the lack of a good (or at least standard) theory of stochastic differential equations.
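To make the mechanism concrete, here is a minimal sketch, in Python, of what distribution-based input sampling for test generation might look like. The signal names and distributions are invented for illustration; this is not the Reactis interface.

import random

# Purely illustrative sketch (not the Reactis API): each input signal of a
# controller under test gets a probability distribution, and a test is a
# sequence of input vectors sampled from those distributions.
input_distributions = {
    "throttle": lambda: random.uniform(0.0, 100.0),           # percent
    "brake_on": lambda: random.choices([0, 1], weights=[0.9, 0.1])[0],
    "speed":    lambda: max(0.0, random.gauss(60.0, 15.0)),    # km/h
}

def generate_test(steps):
    """Sample one test: a list of input vectors, one per simulation step."""
    return [{name: draw() for name, draw in input_distributions.items()}
            for _ in range(steps)]

for step, inputs in enumerate(generate_test(5)):
    print(step, inputs)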

More precisely, the engineers in automotive and aero that I've dealt with are usually mechanical or electrical engineers with backgrounds in control theory. The feedback control they use relies on plant models (i.e. "environments") being given as differential equations, which are deterministic. The plant models they devise for testing their control-system designs often have parameters that they tweak in order to test how their ideas work under different conditions.
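As an illustration of that workflow (not drawn from any particular customer model), the following sketch pairs a deterministic first-order plant with a simple proportional controller; the keyword arguments are the kind of parameters an engineer might tweak to exercise the design under different conditions.

# Illustrative sketch: a deterministic plant model (a first-order ODE,
# m*dv/dt = F - c*v) closed in a loop with a proportional controller and
# integrated with forward Euler.  Parameter values are made up.
def simulate(mass=1.0, damping=0.5, kp=2.0, setpoint=10.0, dt=0.01, t_end=5.0):
    v, t, trace = 0.0, 0.0, []
    while t < t_end:
        force = kp * (setpoint - v)          # control law
        dv = (force - damping * v) / mass    # plant dynamics
        v += dv * dt                         # Euler integration step
        t += dt
        trace.append((t, v))
    return trace

# Re-running with a different damping coefficient probes an off-nominal case.
nominal = simulate(damping=0.5)
worn    = simulate(damping=1.5)
print(nominal[-1], worn[-1])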

These engineers talk in the abstract about how useful it would be to develop analytical frameworks for probabilistic plants, but tractable theories of probability spaces of differential equations are unknown, as far as I can tell.


How can we, as a community, foster the building of industrial-strength tools based on sound theories?


To have an industrial following, tools have to work with the languages that industry uses. For most research tools this is a problem, because their input languages are typically invented by the tool developers.

I see two possibilities. One is to work on commercial languages such as Simulink. These languages are often a catastrophe from a mathematical perspective, but they also usually contain subsets that can be nicely formalized for the purposes of giving tool support. If tools have a nice "intermediate notation" into which these cores can be translated, then this offers a pathway for potential industrial customers to experiment with the tools.
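As a rough illustration of what such an "intermediate notation" could look like (purely hypothetical, and not the representation used by any existing tool), here is a tiny discrete-time dataflow core in Python into which a well-behaved subset of a Simulink-like language might be translated.

from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical intermediate notation: Block and Model are invented names.
@dataclass
class Block:
    name: str
    op: Callable                                      # pure function over input values
    inputs: List[str] = field(default_factory=list)   # names of source signals

@dataclass
class Model:
    blocks: List[Block]

    def step(self, external):
        """One synchronous step: evaluate blocks in (topological) order."""
        values = dict(external)
        for b in self.blocks:
            values[b.name] = b.op(*(values[i] for i in b.inputs))
        return values

# A gain feeding a saturation, as such a core might look after translation.
model = Model([
    Block("gain", lambda u: 2.0 * u, ["u"]),
    Block("sat",  lambda x: max(-1.0, min(1.0, x)), ["gain"]),
])
print(model.step({"u": 0.3}))   # {'u': 0.3, 'gain': 0.6, 'sat': 0.6}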

The second approach is to become involved in standardization efforts for modeling languages. UML 2.0 has benefited to some extent from concurrency theory, but there are many aspects of that language that remain informal and imprecise.


What has concurrency theory offered industry so far? What are the next steps that the concurrency community should take in order to increase the impact of its research in an industrial setting? And what are future promising application areas for concurrency research?


I think the best way to answer the first question is to "trace backward" from commercial tools and modeling languages that have some basis in concurrency. Such tools include those based on Statecharts (Stateflow, STATEMATE, BetterState); others based on Message Sequence Charts (Rational Rose and other UML tools); the French synchronous-language tools (SCADE, Esterel); tools that include model checkers (the EDA, i.e. "electronic design automation", industry); and tools that use model-checking-based ideas for other analyses (Reactis, DesignVerifier).

Unfortunately the process-algebra community has had relatively little impact on commercial tool development. This is not due to shortcomings in the theory, in my opinion, but to the inattention that compositionality continues to receive in the (non-research) modeling community. In my experience, event-based modeling is also relatively uncommon, at least in auto and aero: sampling of "state variables" is the preferred modeling paradigm.

I personally would like to see work on semantically robust combinations of specification formalisms (e.g. MSCs + state machines, or temporal logic + process algebra) and related tools; theoretically well-founded approaches to verifying systems that use floating-point numbers; and compositional, graphical modeling languages (lots of work has been done already, but there is still no commercial interest).
