\centerline{\bf SGML Soldiers On} \smallskip \noindent It is reassuring to know that the cold war still lives on in these days of glasnost. At Mark-Up '89, held in Gmunden, Austria in April, we were treated to young, glamorous and very earnest women reassuring us that \sgml{} would ensure that the free world remained free, thanks to the combat-readiness of the tanks, cruise missiles and destroyers and their accompanying truckloads of documentation.

It is apparent that \sgml{} aficionados can be divided into two distinct groups, the theorists and the pragmatists, and I have to say that it was the pragmatists who made the conference worth attending. They can be further sub-divided into the militarists and the commercial publishers and printers. To a great extent the DoD suppliers have to use \sgml{} and its terrible progeny CALS whether they like it or not; their main aim is to find an easy way around the problem. The now famous Taft memo stating that CALS had to be used for all DoD contracts was widely discussed. (The ultimate print-buyer's revenge?)

As a user of a GenCode-based system (CAPS), I made it my task to search out the {\sl voluntary\/} fellow sufferers, and I was particularly interested to hear their experiences. As usual, the Dutch were there in force. I generally regard them as being pretty well ahead of the game in Europe, so it came as a bit of a shock to find that many of them were still taking {\sc ascii} text and adding their own mark-up to it. In the UK there are some encouraging signs, with at least one other publisher/printer (besides the OU) developing an MS Word to \sgml{} program. Elegant conversion from the author's word-processing program to \sgml{} is still very much the missing link. Avalanche Technologies have gone some of the way towards solving it with FastTag, which I believe is the XGML parser that is also used by DocuPro and ArborText.
To my mind a `visual' system of hierarchy recognition misses many of the subtleties of anything other than simple text. There certainly seems to be no shortage of `conventional' \sgml{} parsers available at a price, Sobemap's Mark-It and Yuri Rubinsky's SoftQuad being two notable examples. Both work as stand-alone word processors and would be great if you could persuade all your authors to use one. Xyvision and Interleaf both gave demonstrations of their \sgml{} front-ends. Interleaf's was very neat, but again it presupposed that the author was the person at the Interleaf keyboard; doubtless it could take in ready-made \sgml{}. Among the major installed GenCode-based publishing systems, Compugraphic with CAPS was notable by its absence.

This conference was significant because, for the first time, people were able to report back on the results of actually using \sgml{} in competitive environments. The Oxford English Dictionary was offered up as an example of mainstream publishers using \sgml{}, which makes a lot of sense when you consider updating and multi-media presentations.

On the military side, concern was expressed first at how the number of standard DTDs was growing on a daily basis (they originally thought they would need only three or four), and secondly at the amount of time it took to write a DTD; noises were made about months rather than weeks. The other worry voiced by the militarists was that of retro-fitting \sgml{}/CALS to existing documentation: how was it to be done, and who was going to pay for it?

I have to confess that I found the theoreticians' discussions of standards dry to the point of being excruciating, and felt a slight annoyance that they apparently failed to recognize the amount of hard work being undertaken by users to breathe life into these standards. But I did come away from the meeting feeling that \sgml{} was actually being used by real people.
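To give a sense of what writing even a trivial DTD involves, here is a minimal sketch (the element and attribute names are invented for illustration, not drawn from any DoD or CALS DTD):

```sgml
<!-- Invented example: a "report" made of a title and one or
     more chapters, each holding paragraphs.  The "- -" and
     "- o" columns are SGML tag-omission indicators (the "o"
     allows the end-tag to be omitted). -->
<!DOCTYPE report [
<!ELEMENT report   - - (title, chapter+)  >
<!ELEMENT title    - o (#PCDATA)          >
<!ELEMENT chapter  - - (title?, para+)    >
<!ELEMENT para     - o (#PCDATA)          >
<!ATTLIST chapter  security (open|restricted) open >
]>
```

Even at this toy scale, every element forces decisions about content models, tag omission and attributes, and a real documentation suite multiplies those decisions many times over, which goes some way towards explaining why writing a DTD is reckoned in months rather than weeks.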
When the conference comes to Europe again in two years' time, I believe we will see that \sgml{} has progressed significantly across all EP applications. \rightline{\sl John Feltham}