- Background: realtime backup between Berlin and Frankfurt (1984/85)
- Vertical compression in self-contained units/matrices (1985)
- Sensational compression efficiency – extreme performance (1985)
- limes datentechnik gmbh develops FLAM (1985)
- Full downward compatibility back to the first prototype (since 1985)
- First customer corporation keeps extending usage (since January 1986)
- FLAM universal version with access methods for data records (since 1986)
- FLAM for 60 platforms: integrability into standard procedures (since 1986)
- FLAM with exits for individual user modules (since 1990)
- FLAM-sub: FLAM as a subsystem or I/O driver under z/OS (since 1992)
- Most successful use in VSAM applications (since 1992)
- FLAMFILE as a member in a FLAMFILE / FLAM archive (since 1993)
- Backup restart through self-contained units in FLAM (since 1993)
- ADC: highly efficient dynamic compression (since 1999)
- Searching – without decompressing, e.g. for SEPA archives (since 1999)
- Parallel and serial splitting for optimized data exchange (since 2000)
- AES integration into all functions of FLAM (since 2004)
- FKME / FKMS: Key Management Extension / System (since 2005)
- Encryption by co-processors, e.g. CPACF (since 2007)
- FLAM-sub meets PCIDSS specs regarding end-to-end security (since 2009)
- Logical elements, network capability, components and open standards (since 2014)
For the West Berlin branches of a major German bank, a realtime backup network had to be set up between Berlin and Frankfurt. The huge amount of data and files and the line speeds available at the time made this a difficult task. Thus the idea came up to compress the data before transmitting them. The state-of-the-art methods of those days were prohibitively expensive in resources (CPU, memory, etc.) and in their overall impact (paging, elapsed time, slowdown of other tasks, etc.).
A patent application was filed for a technique of losslessly compressing statically structured data: run-length compression of columns (bytes) applied to self-contained units/matrices that have been filled with data records of different formats (fixed/variable length). Up to that time, such an approach to data compression had been applied neither in commercial IT nor in technical processes. The method has also proved useful in the sciences wherever data are well structured and contain sufficient vertical redundancy, such as measurement data or traceability data in the food industry.
Tests showed that payment data typical for the time could be cut in size by up to 95 percent and industrial parts lists by 98.5 percent. Even strongly scattered binary measurement values (including floating-point representations with a mantissa) could still be reduced by 65 percent (e.g. weather service data).
The method allows an extraordinarily performant software implementation for that sort of data, particularly in the mainframe version coded in assembler.
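The core idea can be illustrated with a minimal sketch; the record layout and the run encoding below are illustrative only, not FLAM's actual format. Fixed-length records are stacked into a matrix, and each column is run-length encoded:

```python
# Illustrative sketch of vertical (column-wise) run-length compression.
# The record layout and encoding are hypothetical, not FLAM's on-disk format.

def compress_matrix(records):
    """Stack fixed-length records into a matrix and RLE-encode each column."""
    assert records and all(len(r) == len(records[0]) for r in records)
    columns = zip(*records)          # transpose: one tuple of bytes per column
    encoded = []
    for col in columns:
        runs, prev, count = [], col[0], 1
        for byte in col[1:]:
            if byte == prev:
                count += 1
            else:
                runs.append((prev, count))
                prev, count = byte, 1
        runs.append((prev, count))
        encoded.append(runs)
    return encoded

# Structured records share long vertical runs (e.g. a constant currency
# field), so columns compress far better than the row-wise byte stream.
records = [b"EUR0001", b"EUR0002", b"EUR0003"]
packed = compress_matrix(records)
print(packed[0])  # column of b'E' bytes -> [(69, 3)]
```

The more constant a field is across records, the longer its vertical runs, which is why well-structured data compresses so strongly under this scheme.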
In 1985 the company was registered with the Bad Homburg registry office. Its business purpose was to turn the patented method into a marketable software product, with the first version aiming at mainframe users (IBM, Siemens).
The software product was named FLAM: Frankenstein-Limes-Access-Method. A prototype was built for presentation purposes: FLAM version V1.0 for MVS and BS2000.
With that, data could now be "flambéed / de-flambéed".
The bank was operating more than 40 computer centers with different platforms and database systems. For the night-time syncing of the changes it urgently needed a compression tool.
This original agreement still exists, while the scope of application within the enterprise keeps increasing.
A universal version was released that could be ported to all systems based on structured records (measurement, finance, and administrative data, printing, parts, and address lists, job logs, etc.).
To enable its use as an access method for sequential and index-sequential files (such as ISAM, VSAM) an API was defined that could be integrated into applications. Starting from this version FLAM could be used as a subroutine or as an alternative to standard system-I/O.
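The idea of a record-oriented access method that hides compression behind an ISAM-like interface can be sketched as follows; the class and method names are invented for illustration, and zlib stands in for FLAM's own compression methods:

```python
# Hypothetical facade for a compressing record access method. Applications
# call put/get as with ordinary record I/O; compression happens per segment
# behind the interface. Names are invented; zlib is a stand-in codec.
import zlib

class CompressedRecordFile:
    def __init__(self):
        self._segments = []       # each segment: one compressed unit of records

    def put(self, records):
        # Sketch only: newline-joined records assume no embedded newlines.
        blob = b"\n".join(records)
        self._segments.append(zlib.compress(blob))

    def get_all(self):
        for seg in self._segments:
            yield from zlib.decompress(seg).split(b"\n")

f = CompressedRecordFile()
f.put([b"rec-1", b"rec-2"])
print(list(f.get_all()))  # [b'rec-1', b'rec-2']
```

Because the interface deals in whole records, the same calls can back either a subroutine integration or a replacement for standard system I/O.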
From 1986 until today, FLAM has been compatibly implemented for 60 platforms. That provided the basis for FLAM becoming standard in various data exchange procedures (e.g. bank clearing).
In that context, innumerable problems had to be solved to ensure compatibility of characters, fields, formats, etc. and to provide a FLAMFILE format adequate for the respective transfer product/procedure. For payment transactions, a compression format was included that encoded in printable characters and was insensitive to code conversions during file transfer (ASCII <=> EBCDIC).
On top of the many platforms needed for the standard in payment transactions, additional implementations were required for telecommunications. Message switching centers (MSCs) of global cellular networks had to back up and transfer usage data to IT centers for international clearing of usage and billing records between network operators and service providers and their respective processing platforms.
FLAM's popularity in the banking sector grew rapidly not only with financial institutions, their associations, and umbrella organizations, but also with services provided in the banking and stock exchange contexts. Those were joined by services for credit card clearing and card ordering and authentication.
Ever since, FLAM users don't need to know which platform will be used for reading a FLAMFILE. Therefore, you can decompress FLAMFILEs created on systems that no longer exist. In doing so, FLAM takes into account character encodings and data formats that are standard on the executing system. Optionally, you can enforce output conversion according to your specification.
The above also holds for original files that were archived in "the far IT past". To be converted, they need only be processed with FLAM.
Upon requests from users, FLAM was extended by user exits. This allows individual user processing of the data wherever I/O operations take place. Hence, integration of FLAM into an application is no longer necessary.
In order to make integration of FLAM into applications easier and totally transparent, a subsystem was developed for accesses to MVS' DMS (Data Management System). That way, FLAM can be placed like a driver between the application and MVS (add-on product). All you need is to add the parameter "SUBSYS=FLAM" to the DD statement. This integration works with applications doing sequential as well as index-sequential processing.
One large customer deployed this for its day-to-day processing of bulk data - in the form of hundreds of VSAM files. Along the way, this method of access solved several fundamental VSAM problems. The savings in CPU time, disk storage, and elapsed time for a complex process are still phenomenal. This must be attributed to the fact that compression is done on self-contained units (matrices) in which even variable-length records are supported. FLAM works with a highly optimized VSAM key management of its own that also supports duplicates in the original keys.
During compression, FLAM can concatenate any number of files into one FLAMFILE (like members in a library). Needless to say that not only accessibility of every single member is retained, but also that of each of the self-contained units (segments/records) within a member.
File transfer programs (e.g. FTP, RVS, FTAM) usually only allow transfer of a single file. The above feature made it possible and still allows aggregating arbitrarily many small files into one file when creating the compressed FLAMFILE.
One customer, for example, was able to compress with FLAM 16,000 files from an old archive into one single FLAMFILE that could be read compatibly by all systems/platforms supported by FLAM. File transfer efficiency is dramatically increased by this approach when a multitude of files needs to be transferred to one or several different addresses.
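A single-file container with individually accessible members can be sketched roughly as follows; the directory layout is invented for illustration and zlib stands in for FLAM's compression:

```python
# Minimal sketch of a single-file container with named, individually
# addressable members, loosely analogous to members in a FLAMFILE.
# The format (offset/length directory) is invented for illustration.
import io
import zlib

def pack(members):
    """members: dict name -> bytes. Returns container bytes and a directory."""
    buf, directory = io.BytesIO(), {}
    for name, data in members.items():
        blob = zlib.compress(data)
        directory[name] = (buf.tell(), len(blob))   # offset, length
        buf.write(blob)
    return buf.getvalue(), directory

def extract(container, directory, name):
    off, length = directory[name]                   # random access per member
    return zlib.decompress(container[off:off + length])

body, toc = pack({"a.txt": b"alpha", "b.txt": b"beta"})
print(extract(body, toc, "b.txt"))  # b'beta'
```

Because each member is compressed independently, a single member can be extracted without touching the rest of the container, which is what keeps per-member access cheap even for thousands of aggregated files.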
You automatically work "end-to-end": the sender's client system is connected to that of the receiver. Features like de/compression at the file transfer server can be suppressed. The receiver need only decompress an individual member when it is requested; on that occasion, conversion functions may be used. Later on, with the advent of the AES standard, encryption/decryption became possible as well.
A dedicated application for encrypted data exchange needs a solution that does not require re-transmission of the entire file (which may be very large) whenever the connection breaks down. This property must hold with regard to both compression and encryption. Being built of self-contained units, FLAM had no problem meeting this requirement: it allows re-positioning to such a compressed and encrypted unit and restarting the transmission from that point.
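The restart logic enabled by self-contained units can be sketched like this (names and the failure model are illustrative only):

```python
# Sketch of a restartable transfer over self-contained units. Because each
# unit is independently decodable, a broken transfer resumes at the last
# unsent unit instead of at byte zero. All names are illustrative.

def transmit(units, send, resume_at=0):
    """Send units[resume_at:]; return index of the first unsent unit."""
    for i in range(resume_at, len(units)):
        try:
            send(units[i])
        except ConnectionError:
            return i          # restart point: re-position to this unit later
    return len(units)

sent = []
def flaky_send(u, fail={"at": 2}):
    # Simulated line: drops the connection once, before the third unit.
    if fail["at"] is not None and len(sent) == fail["at"]:
        fail["at"] = None
        raise ConnectionError
    sent.append(u)

units = [b"u0", b"u1", b"u2", b"u3"]
pos = transmit(units, flaky_send)          # fails before sending u2
pos = transmit(units, flaky_send, pos)     # resumes at u2
print(sent)  # [b'u0', b'u1', b'u2', b'u3']
```

The key point is that the restart index refers to a unit boundary, so compression and encryption state never has to be reconstructed mid-stream.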
Several years of research and development culminated in the release of a highly efficient compression method, MODE=ADC (Advanced Data Compression), for self-contained segments filled with arbitrary data (up to 64 KB). This method allows compressing in a straightforward manner, a feature important for integrating the algorithm into access methods.
Based thereupon the access techniques were extended by a very efficient retrieval scheme that allows searching in compressed data without decompressing. One bank, for example, is using this feature for selecting specific transaction entries (recently even XML-formatted SEPA transactions) from a huge archive compressed with FLAM. This works even when only parts (field contents) of a transaction entry are given.
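One way such compressed-domain searching can work is sketched below; this toy uses a static, byte-aligned substitution code, under which a search pattern can itself be encoded and then matched directly against the compressed stream. FLAM's actual retrieval scheme is more elaborate; only the principle is shown, and all names are invented:

```python
# Simplified illustration of searching compressed data without decompressing:
# under a static, byte-aligned substitution code, the query is encoded with
# the same code and matched against the compressed bytes directly.

CODE = {b"A": b"\x01", b"B": b"\x02", b"C": b"\x03"}   # toy static dictionary

def encode(data):
    return b"".join(CODE[bytes([b])] for b in data)

segments = [encode(b"ABCA"), encode(b"CCBB")]          # "compressed" segments

def contains(pattern):
    coded = encode(pattern)                            # encode the query once
    return [i for i, seg in enumerate(segments) if coded in seg]

print(contains(b"BC"))  # [0] -- found without decoding any segment
```

Only the (short) query is transformed; the (large) archive stays compressed, which is what makes selecting single transaction entries from a huge archive affordable.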
An earlier custom solution for that bank allowed very efficient searching FLAM-compressed DTA files (legacy German payment transaction format) on magnetic tapes for specific transaction entries. That was realized with vertical compression and without AES encryption.
Upon customer request, we introduced the options of parallel and serial splitting of FLAMFILEs while they are being created. The purpose was to limit the size of the data transmitted in one run or to enable parallel transfers over multiple lines when available.
The AES encryption standard was integrated into FLAM by encrypting the self-contained segments with derived keys. Despite the encryption, such segments can still be accessed directly, as with index-sequential processing, and then be decrypted and decompressed. Various checks are implemented, such as MACs (cryptographic checksums). Input data may be stored in a segment without compression if the user only wants the data to be encrypted. Members of a concatenated FLAMFILE can be handled in the same manner. Even the search methods can be applied without having to decrypt segments.
There is an option to link FLAM to any encryption method and key-management system. This is done by using FKME (FLAM Key Management Extension). This feature also allows FLAM to connect to an HSM (Hardware Security Module), if present.
If the user has to change the key and/or the encryption method, it is enough to just change the FLAMFILE header. The encrypted segments/members, including the MACs, need only be copied.
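The reason re-keying can be confined to the header is the envelope pattern: bulk data is encrypted under a random data key, and only that data key, wrapped with the user's master key, lives in the header. A minimal sketch follows; a hash-based XOR keystream stands in for AES here, and the whole layout is illustrative, not FLAM's real format:

```python
# Sketch of header-only re-keying via envelope encryption. The bulk data is
# encrypted under a random data key; the header holds the data key wrapped
# with the user's master key. XOR with a SHA-256 keystream is a stand-in for
# AES; names and layout are illustrative, not FLAM's actual format.
import hashlib
import os

def keystream_xor(key, data):
    """Symmetric toy cipher: XOR data with a hash-derived keystream."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

data_key = os.urandom(32)
body = keystream_xor(data_key, b"compressed segments ...")   # bulk encryption

master_old, master_new = os.urandom(32), os.urandom(32)
header = keystream_xor(master_old, data_key)                 # wrapped data key

# Re-keying: unwrap the data key with the old master key, wrap it with the
# new one. The encrypted body is never touched.
header = keystream_xor(master_new, keystream_xor(master_old, header))

# The new header still decrypts the untouched body.
assert keystream_xor(keystream_xor(master_new, header), body) \
    == b"compressed segments ..."
```

Since the segments and their MACs are bound to the data key rather than to the master key, rotating the master key is an O(header) operation regardless of archive size.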
Based on this service provider interface, a PCIDSS-conforming solution was created for ordering debit and credit cards. This solution is now being used worldwide.
The add-on product FKMS was developed to solve problems with key generation, key management, and key distribution (authentication) for customers who don't operate their own cryptographic infrastructure. FKMS provides keys for a specific FKME on the basis of a simple role concept (RBAC).
IBM developed a co-processor (CPACF) that supports, among other options, AES standard encryption, saving CPU time significantly. FLAM was extended by a function that detects the presence of that feature and its availability. If that is the case, FLAM switches dynamically to hardware encryption. Otherwise, FLAM uses the conventional software-implemented standard AES encryption. Consequently, FLAM remains compatible with all FLAM versions supporting AES en/decryption.
Such an approach can be integrated into an application, for example on z/OS via the FLAM subsystem; for other platforms FLAM offers adequate interfaces. The implication: you now have a solution that ensures plain data never leave the application and that decryption is never done outside the application. Plain data are decrypted and decompressed only when they are actually needed, without the application noticing.
This is the ultimate solution for end-to-end security. By this, users meet the requirements of PCIDSS (Payment Card Industry Data Security Standard).
The extension of the access-method concept by the notion of elements enables FLAM, for the first time, to handle logical file formats (XML, ASN.1, SWIFT, etc.) in addition to physical ones. This means it can, for example, search FLAMFILEs for elements such as a bank ID or convert between logical formats (SEPA, SWIFT, DTA).
Network capability separates compression and/or encryption from remote storage/retrieval of data in FLAM archives (secure cloud).
The component model allows easy replacement of methods (suites), reading from different sources (files, stream, database), transferring over different networks (IP, SSL/TLS, MQ), and storing data "flambéed" in different types of storage (file, cluster, database).
The support of open standards like Unicode, GZIP, OpenPGP, and Base64 for original data dramatically increases the scope of formats that FLAM5 can read and write. Beside the various I/O methods, up to 32 different conversions can be combined with a subsequent formatting of the data. As an example, in a jobstep under z/OS, you could read a file from USS in block mode, transform the data from Base64 to a binary format, decrypt them with OpenPGP, decompress them with BZIP2, change the character set from UTF-16LE to IBM1141, and write the result as text records with ASA control characters into a VBA or FBA file under MVS. All this works without temporary files, and each byte is touched only once - a fact that saves considerable CPU time and temporary storage while improving security.
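A chained, one-pass conversion in the spirit of that z/OS example can be sketched with stdlib stand-ins: Base64 decode, zlib (in place of BZIP2/OpenPGP), and a UTF-16LE to EBCDIC conversion (cp500 here, since the IBM1141 code page is not in Python's stdlib). All stages feed each other directly, without temporary files:

```python
# One-pass conversion chain with stdlib stand-ins: Base64 -> binary,
# zlib decompression (in place of BZIP2/OpenPGP), then UTF-16LE -> EBCDIC
# (cp500 substitutes for IBM1141, which Python's stdlib lacks).
import base64
import zlib

def convert(blob):
    raw = base64.b64decode(blob)          # printable transport form -> binary
    plain = zlib.decompress(raw)          # stand-in decompression stage
    text = plain.decode("utf-16-le")      # source character set
    return text.encode("cp500")           # target EBCDIC code page

# Build a wire-format sample, then run it through the chain.
original = "Hallo".encode("utf-16-le")
wire = base64.b64encode(zlib.compress(original))
print(convert(wire).hex())
```

Each stage hands its output straight to the next, which is the "each byte touched only once" property the text describes; real FLAM5 additionally streams record by record instead of materializing the whole payload.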
The basis of all these developments is the self-contained unit of data that started it all, because vertical (columnwise) compression wouldn't have worked otherwise.