INTERVIEW


Interview With Bjarne Stroustrup

Conducted by Elden Nelson
Editor in Chief, VCDJ

 

Bjarne Stroustrup is the designer and original implementor of C++ and the author of The C++ Programming Language and The Design and Evolution of C++. A graduate of the University of Aarhus, Denmark, and Cambridge University, England, Dr. Stroustrup is the head of AT&T Labs' Large-Scale Programming Research Department, an AT&T Fellow, an AT&T Bell Laboratories Fellow, and an ACM Fellow. Stroustrup gave this interview in conjunction with SD 2000 last March, where he was a keynote presenter.

Looking Back...

If you were to start from scratch today and design C++ over again, what would you do differently?

Of course, you can never do a language over again. That would be pointless, and every language is a child of its time. If I were to design a language today, I would again have to make tradeoffs between logical beauty, efficiency, generality, implementation complexity, and people's tastes. It is important to remember how much people's tastes are conditioned by what they are used to.

Today, I'd look for a much simpler syntax—and probably clash with people's confusion between the familiar and the simple. I'd limit violations of the type system to very few constructs and have those clearly identified by ugly syntax (much as I did with the new-style casts; for example, reinterpret_cast<int>(p) is an ugly notation for an ugly operation). That way, it would be easy to create a safe subset by disabling the unsafe operations.

I'd try to create a relatively small core language—containing key abstraction features along the lines of classes and templates—and put much more of "the language" into libraries. However, I'd try hard to make the core language powerful enough for those libraries to be written in the core language. I do not like the idea of having standard library writers rely on extra-linguistic facilities not available to "ordinary users." I'd work hard on getting this core language very precisely defined.

Most importantly, I'd try to give the language a long gestation period so that I could make modifications based on solid feedback from real use before it was used by hundreds of thousands of programmers. This is probably the hardest of all because once something is obviously reasonably good and very promising, it will be used. And once in serious use, incompatible changes become extremely difficult.

I believe these ideas are very similar to those I used in the design of C++ and which still guide the evolution of C++, but updated by a decade or two. And no, I don't think anything like my "ideal language" currently exists.


When you formulated the C++ language, did you use ideas gained from other "up and coming" object languages at the time, such as Modula-2?

For C++, I used ideas from C, BCPL, SIMULA, ALGOL 68, Ada, ML, and others. I knew Modula-2—and at least a dozen other languages—at the time, but I don't recall any direct influence from Modula-2.

I wrote The Design and Evolution of C++ to answer questions about why things are the way they are in C++ (and why some things aren't). That is, to document the design decisions, principles, and tradeoffs that led to C++. I recommend D&E to people with interests in such questions.

What kinds of future enhancements to the language do you foresee? Are there things that will be removed?

Unfortunately, it is close to impossible to remove anything that's really worthwhile to remove. For starters, I'd like to ban C-style casts and narrowing conversions. Even that would probably be too radical—but compiler writers could issue strong warnings. I'd love to replace arrays with something like the standard library vector, but that's clearly infeasible. However, programmers can gain significant benefits simply by preferring vector over array in all application programming. I think the key point here is that you don't have to use the most tricky parts of C++—there already exist superior alternatives.

There is no major feature that I'd like to remove. In particular, I don't think any of the major facilities added to C to create C++ could be removed without doing harm. Often, people ask this question hoping that I'd single out a feature such as multiple inheritance, exceptions, or templates. So, I'd better be explicit that I consider multiple inheritance essential for a statically typed language with inheritance, exceptions the right way of handling errors in a large system, and templates essential for type-safe, elegant, and efficient programming. In each case, we can quibble about language details, but the fundamental concepts must be supported.

Currently, we are still learning about Standard C++. We are still developing new and interesting programming techniques based on the standard set of features. In particular, people are only just getting used to the STL and to exceptions. There are many effective techniques that are still underused by many programmers. We should not rush to add features to the language.

I think the current focus should be on the provision of new libraries supporting widely useful facilities more elegantly than has been the case in the past. There is a vast potential in that. For example, wide use of more elegant libraries for the support of concurrency would be a boon—the C-style threads packages are suboptimal. We could also use better binding to all kinds of "other systems" such as SQL and various component models. The numeric community seems to have taken the lead with elegant and efficient libraries (such as Blitz++, POOMA, and MTL—see www.research.att.com/~bs/C++.html).

Once experience has been gained, we'll be in a much better situation to decide what could and should be standardized.


With the inevitable shift to an increasingly Web-centric, distributed world, do you believe that C++ will remain as relevant as in the past? Will the single, "general-purpose" language be replaced by developers using several more specialized languages together (e.g., Perl and JavaScript)? What changes, if any, will need to be made to C++ or its standard library in order to support this new computing paradigm?

There never was a single language suitable for all work, and I doubt there ever will be. Real systems are always constructed using a variety of tools and languages. C++ was designed to be one language among many and one tool among many. As ever, a general-purpose language such as C++ will be complemented by special-purpose languages and tools wherever specialization yields significant benefits. That said, I think that much current use of "specialized" languages would be better done with C++ augmented with suitable domain-specific libraries. Much unmaintainable code is written in "scripting languages." However, that may have less to do with the languages chosen than with the "get the product to market yesterday" attitude that leaves no room for concerns about program structure, scalability, and maintenance.

I'm not sure that the majority of code will be "Web-centric." Even the systems that deal directly with the Web will consist mainly of programs dealing with local resources (such as IP connections).

Geographical distribution is a challenge to system builders, as is the high degree of concurrency needed in some server applications. Maybe we'll see libraries dealing with that standardized—naturally such libraries already exist. Most likely, some primitive operations and some guarantees would need to be added to the core language for better support of such libraries.

For the Web, and for networking in general, we badly need a genuine security model at the system/network level. Relying on downloaded "scripts" in languages such as JavaScript to behave reasonably is just dreaming. Please note that I do not claim that C++ provides a solution to this problem either. C++ is designed to provide efficient access to all system resources, not to prevent fraud.


Where do you see the future of the C++ language going? Do you think it will become obsolete in the next 10 years, stay just as viable in its current form, or evolve into something different?

C++ has a most promising future. You can write great code in it, and despite much hostile hype, it is the best language for writing systems where performance matters or where significant complexity needs to be addressed. I know of no language that approaches C++'s combination of general-purpose applicability, efficiency, and elegance.

I see no signs of C++ becoming obsolete. As far as I can determine, its use is still growing. Naturally, we'll see changes during the next 10 years, but not as many as the set of questions in this interview seems to imply. Like all living languages, C++ will evolve. When I talk with "language experts," the clamor for change is deafening, but when I talk with systems builders, the primary plea is for stability.

C++ will change, but hopefully as the result of experience rather than in response to fads. There will be minor additions to better support programming techniques that we are just learning to use more effectively, such as generic programming. There will be a lot of library building, and I expect that we'll see novel facilities to support better libraries. I hope the main extensions will be of a general nature, supporting abstraction, rather than ad hoc facilities supporting specific application-building tasks.

For example, "properties" is a useful application concept that I don't think has a place in a general-purpose programming language. The concept is easily supported in Standard C++ through a set of classes. If we find that a set of classes cannot support the ideal notion of a "property" well enough for our taste, we should not rush to add "properties" to the language. Instead, we should determine how to improve the class and template mechanisms to allow a library builder to closely approximate the ideal of properties. Maybe the right answer to the problem of "properties" is improved support for function objects.

For C++ to remain viable for decades to come, it is essential that Standard C++ isn't extended to support every academic and commercial fad. Most language facilities that people ask for can be adequately addressed through libraries using only current C++ facilities. Actually, an amazing fraction of the facilities that people ask for are already in Standard C++ and supported by the recent releases of their favorite compiler (whichever that is). For many C++ programmers, the easiest route to improved code is not through language extensions, but through a good, slow read of an up-to-date C++ textbook.


What do you think of the current crop of scripting languages—especially Python, which is seen as an easier way to learn OO techniques than working with C++?

Some of these languages are nice. For example, I like much of what I have seen of Python. However, I'm not sure that it involves the same "OO techniques" you learn and use in the various languages. Naturally, every professional programmer should know several languages, and should be aware that programming and design techniques differ significantly between languages.

As far as I can see, the kind of systems you build with scripting languages are very different from those you build with a general-purpose language such as C++. When you learn to use C++ well and when you learn to use a scripting language well, you are learning skills that differ significantly. There is no common set of "OO techniques" that supplies most of what is needed for effective system building.


Are there planned extensions/changes for the standard C++ language to better support distributed computing?

No, and I don't think any are needed. Most can be done by better libraries. At most, we might need to add some low-level primitives and/or guarantees to the standard to support such libraries.

Is there a possibility that C++ will define a portable binary interface in the future?

If by "portable" you mean portable across incompatible hardware platforms and operating systems, then I think the answer is "no." Of course, we could define an interpreter or a virtual machine, but that would get in the way of C++'s strength in giving near-optimal access to system resources. What I do hope to see—and within a relatively short time—is platform ABIs. For example, there is an effort to define a C++ ABI for Intel's new IA-64 architecture (http://reality.sgi.com/dehnert_engr/cxx, http://developer.intel.com/design/ia-64/devinfo.htm). I think such efforts deserve strong support from the user community.

It would be nice finally to be able to link code compiled with different compilers on a PC.


Are you working on any new languages at the moment?

No. I'm still learning about how to use Standard C++ and I'm also experimenting a bit with distributed computing. I consider programming far more interesting than programming language technicalities. I think that you should consider designing a new language only when there is something you can't reasonably express in the existing ones, and C++ serves me well for most of what I'm doing.


In hindsight, do you think that making member functions non-virtual by default was the right decision? If you had the chance, would you change it?

Yes and no, respectively.

One of the things that has kept C++ viable is the zero-overhead rule: What you don't use, you don't pay for. Making member functions virtual by default would violate that and make it much harder to provide efficient concrete types. Making virtual the default is "obvious" to people who think of classes as huge things living in complicated hierarchies. Virtual functions are in general unsuitable for crucial "small and concrete" types such as complex numbers, points, vectors, lists, and function objects. For such types, compact representation, inlining of basic operations, access without indirection, stack allocation, and a guarantee against undesired modification of semantics from overriding functions are key.

Also, if "virtual" were the default, you'd need a "non-virtual/final" keyword, and you'd run into extensibility problems wherever you overused it. There really isn't any free lunch in language design.


How do you think the process of standardization through IEEE has affected the maturity, flexibility, and capabilities of the C++ language?

The ISO standardization was and is important to C++. Most importantly, the standard committees provide "neutral ground" where technical people can discuss technical issues. Where else could users and compiler writers from competing organizations such as Microsoft, IBM, Borland/Inprise, and Sun sit down and get joint work done to the benefit of their users? The ISO process is democratic and based on consensus. It takes time to build such consensus, but that is well worth the effort. The alternative seems to be a language definition crafted to serve the commercial interests of one company (or a few companies).

The Standard C++ that emerged from the ISO process is a better approximation to my ideals than any previous version. Exceptions are much as I designed them, templates emerged more flexible, and namespaces and runtime-type information were added. From the point of view of support for programming styles ("paradigms" if you must), the rest is details. Naturally, a major part of a standard committee's job is to precisely define those details.

Given the wide availability of implementations that approximate the standard, the time has come for people to experiment with the new facilities. Many things that didn't work a few years ago now work. Many techniques that weren't realistic a few years ago are now applicable in real applications. Many techniques that wouldn't occur to most people simply from seeing the language definition have been developed. For example, the STL (the standard library's framework of containers and algorithms) is a good source of interesting new techniques.

Naturally, you shouldn't dash ahead and use every language feature and every new technique in your next critical project, but it is time to learn about the new language features and the new standard library, and to experiment to see what works for you and what does not.

For documentation, you can get the standard itself for $18 from ANSI (see www.research.att.com/~bs/C++.html) or a late draft for free. However, the standard is not a tutorial. For experienced programmers, I recommend my The C++ Programming Language (Third Edition), which presents the complete language and standard library in a more accessible manner. It also addresses many of the fundamental design and programming techniques that C++ supports. However, even that book is not for novices, so check my home page (www.research.att.com/~bs/) first to get an idea of whether my style and level of detail suit your needs.


C++ is fading in popularity in many circles because it takes too much effort to do basic things such as manage memory (no garbage collection), manage dependencies (you can't create packages), and manage versioning of components—support that is now considered standard in modern languages. Java and the much-rumored COOL language attempt to address these issues. Would addressing these issues in C++ require substantial departures from the original goals of C++? How could C++ evolve so that those of us with substantial investments in C++ could leverage those investments, instead of having to start over with other languages?

I haven't noticed C++ being used less than before. On the contrary, the indicators I see point to the usual steady increase in C++ use. However, a steady increase in use from a huge base, an increase in standards conformance, an increase in portability, and improvements in libraries do not lend themselves to hype. I think the "fading" you are referring to is primarily a marketing/press phenomenon.

If you want garbage collection, plug a garbage collector into your C++ application. Good free and commercially supported garbage collectors exist for C++ and are in significant use (www.research.att.com/~bs/C++.html).

If you don't want garbage collection, it is worth noting that the standard containers alleviate much of the need for explicit allocation and deallocation. Thus, by using modern styles supported by modern libraries, you can eliminate most memory management problems.

These same techniques can be used to eliminate the more general resource management problems. Memory isn't the only resource that can leak. Thread handles, files, locks, and network connections are examples of other important resources that must be managed correctly to build a reliable system. If you believe that automatic garbage collection solves your resource management problems, you are in for a rude awakening.

C++ provides facilities that support general resource management. The key technique—"resource acquisition is initialization"—relies on constructors and destructors to tie resource lifetimes to object lifetimes. This technique is supported by the language rules for partial construction of objects and by the exception mechanism in general. For a discussion of exception handling techniques, see the new "Standard-Library Exception Safety" appendix of The C++ Programming Language (Special Edition), which I have made available on my Web site (www.research.att.com/~bs/3rd_safe0.html).

C++ is much better than the caricatures of it offered by some overenthusiastic proponents of competing languages. In particular, I think that many "other features" are oversold. Often, they are easy to emulate in C++. Conversely, new languages have a tendency to stress new specific features at the expense of generality. This is one of the reasons that the size and complexity of a new language tend to triple between its initial launch and its acceptance as a useful tool for general computing.

The best investment a person or organization using C++ can make is in better understanding of Standard C++ and in modern C++ design and programming techniques. Too many people program in styles that belong to the mid-1980s, or even earlier.

Exactly where the programming language ends and the system/platform starts is a difficult issue. My view is that there should be an obvious boundary and that system dependencies should be kept out of programming languages as far as possible. System-specific and system-dependent libraries are the place for system dependencies, not language primitives.

I do not believe that programming languages should address issues such as component versioning. That is a systems issue that is best addressed in a programming language by providing a suitable library for system access. C++ has the facilities for doing that, so addressing such issues does not require a departure from my ideals for C++. Loading up C++ with a lot of special-purpose features, on the other hand, would be a departure, and a step back relative to the ideal of writing programs that are maximally portable and independent of system details.


Do you think that C++ class libraries have partially failed in their missions due to the fact that if you derive a class from a base class in the library and override a virtual function, you have to have the source code for the base class to know whether to call the base class's implementation of the same function?

Sigh. Some C++ class libraries have partially failed because their designers thought they had to design that problem into their libraries and because some users thought they had to use libraries that way. It's simply poor design, poor use of C++.

If you don't want to depend on data or code in a base class, don't put data or code in your base classes. That's what abstract classes are for. Consider:


class Reader {
public:
    virtual bool empty() = 0;
    virtual Element get() = 0;
};

This provides an interface to any class that defines the "Reader" functions in a derived class. A user is completely independent of details of those derived classes. In particular, user code does not need to be recompiled if a class derived from Reader changes. Also, a user can simultaneously use many different implementations of Reader (that is, many different classes derived from Reader).

Abstract classes have been directly supported since Release 2.0 in 1989, and the technique/style has always been possible. The history and language design considerations are described in D&E, and naturally, The C++ Programming Language explains when and how to use abstract classes.

Incidentally, implementing a class by deriving from an abstract interface class and a class from a hierarchy of classes providing useful implementation facilities is one of the simplest and most obvious uses of multiple inheritance:


class My_class : public Interface, protected Implementation {
    // Override virtual functions from Interface,
    // implementing the overriding functions
    // using facilities offered by Implementation.

    // Where needed, also override virtual functions
    // from Implementation.
};

I consider abstract classes one of the most underused features of C++. Programmers keep designing deep hierarchies with significant amounts of data and code in base classes. Sometimes that makes sense, but for major system interfaces where you want independence between parts of a program, the pure interfaces offered by abstract classes are usually a better design choice. Another problem with older C++ libraries is that their designers did not have templates available and sometimes—out of need or ignorance—used inheritance where type parameterization was more appropriate.


Why is there no "super" keyword in C++?

Because it is neither necessary nor sufficient in C++.

"Super" is not necessary because the Base::f notation allows the programmer to express the idea that f is a member of Base or one of Base's bases.

"Super" it not sufficient because you need to be able to express the idea that f is an f from Base1 rather than an f from Base2.


Some vendors have/are modifying their C++ compilers to support platform-specific language extensions. How do you feel about this and what do you think its effect will be?

I think that platform-specific extensions should be minimized, and where they are essential, they should be designed so that their use can be localized in libraries. Naturally, platform suppliers are prone to deem many more extensions essential than I am. They are also prone to provide extensions in ways that will permeate application code so as to make it very difficult for users to change vendors. As a user who values portability, I deplore such lock-in tactics.

For users, the ideal must be portability and the isolation of platform-specific code into specific sections of the application code. Portability and semantics that don't change at the whim of a vendor are the advantages of a standardized language over a proprietary language. I think C++ vendors should realize that this can be a competitive advantage and minimize proprietary extensions and the impact of such extensions. If you want proprietary languages such as Java and Visual Basic, you know where to find them.


What is your opinion on the various C++ compilers available today? Don't let the fact that this is for Visual C++ Developers Journal influence your opinion!

They are getting better. All of them. I use six different C++ compilers on a regular basis. I couldn't have done that a few years ago. Then, some widely used C++ implementations simply weren't good enough for what I needed them for.

I will let the fact that this is for Visual C++ Developers Journal influence what I say. This is the perfect place to encourage Microsoft to get its act further together vis-à-vis the standard! VC++ has been improving, but Microsoft has the resources to further improve standards conformance and to provide higher-quality support for the core language facilities and the standard library. For example, as with most current C++ implementations, the error messages for templates leave much to be desired.

In the area of conformance, things are much better than they used to be, but I still miss template friends and partial specialization in VC++. I'd love to see someone implement separate compilation of templates—an important facility that I have not been able to use since the Cfront days.

It would also be nice if VC++ was shipped with a way for novices to start using the standard facilities. The following should be a trivial first program to get to run:


#include <iostream>

int main()
{
    std::cout << "Hello, new world\n";
}

In my opinion, a slight increase in the resources devoted to the standard library compared to the resources devoted to proprietary extensions and facilities would be the cheapest way for Microsoft to help the largest number of programmers.

The performance of the generated code is generally good. Implementations tend to differ based on the differing concerns of the user communities. I think that the most significant gains are to be had in tuning the standard library. For example, reading a sequence of characters into a string from an istream is an operation worth optimizing—if for no other reason than not to tempt programmers to fiddle around with character reads, explicit buffering, allocation, pointers, etc. For example, the following code should be as efficient as it is elegant:


vector<string> vs;
string terminator = "endend";
string s;
while (my_input >> s && s != terminator) vs.push_back(s);

See my paper "Learning Standard C++ as a New Language" (link on my "papers" page) for a discussion of style and efficiency.


Standard C++ doesn't define any means of supporting concurrency, persistence, and component-based programming. This has led to the proliferation of incompatible, platform-specific frameworks such as CORBA, DCOM, and SOM—all of which are unintuitive and kludgy. Isn't this a clear indication that Standard C++ should add direct support for concurrency (threads, in particular) and a component object model?

Concurrency and object models are clearly among the greatest challenges facing language designers (of any language) today. Unfortunately, a significant part of this challenge is political rather than technical. There is just too much money at stake for it to be otherwise.

The ideals for users must be a language that supports a wide variety of concurrency needs directly and a language that supports the general notion of a component well. Ideally, the language facilities are then mapped to a given component architecture with minimal effort from the programmer. This is a modern variant of the ideal of keeping applications reasonably independent of the hardware and operating system they run on.

This can clash with the ideal that each component could be written in any programming language and that each component should be designed to be trivially usable from any programming language. That is, some proponents of component models argue that programmers should consider programming languages interchangeable and write specifically for a given object model with the primitives of that object model highly visible in the code. These are not ideals I share. However, I suspect that an excellent compromise between the two ideals can be achieved.

From a naive programmer's point of view, a component to be called is a glorified abstract class: You get an interface from somewhere and use it through a handle exactly as you'd use other abstract classes through handles, pointers, or references. This can be neatly presented to a user without explicit use of language extensions and without the naive user needing to know which component model is actually used. For example, a handle class may take a name of a service as a string argument:


Printer_handle ph("d208d"); 

where "d208d" happens to be the name of the printer down the hall from my office.

In this way, system dependencies can be hidden from naive users and from most parts of large programs by exactly the same techniques that are used to keep the operating system and the hardware hidden from users of standard I/O. A naive user should never have to explicitly deal with "magic" of the component model—such as unique component identifiers. More sophisticated use will require more knowledge, will write platform-dependent code more often, and might have to make explicit use of language extensions supporting a component model. The main point here is that exposure to a component model could and should be gradual and dependent on the need of an application to directly manipulate the facilities offered by the model. In most of my code, I want to be a naive user.

Component models have much to offer, but currently their C++ bindings force the programmer to do too much work to use them, force the programmer to be too aware of the model, and often deprive the programmer of the richness of the programming language facilities. There is a tendency to encourage programmers to write in a common subset of all languages supported by a component model. The "language independence" offered need not mean that every interface has to be usable from every language. That is, interfaces need not be expressed in the lowest common denominator among supported languages.

When I'm writing a C++ program, I want to use C++ facilities such as the standard library vector, string, map, and list to communicate between my components. I do not want to lower my level of abstraction and use int*, char*, void*, and casts instead. Where a component is meant to be available from every language, I can either provide an additional interface that works at a lower level (as another abstract class; this is one of the things that multiple inheritance is good for) or provide routines in other languages that provide access to my C++ abstractions.

No. I'm not saying that you can do something like this on your machine today. I'm saying that this degree of elegance and flexibility is possible when using component models. I'm also saying that it is possible to provide it with minimal intrusion from system-dependent language extensions. I believe these ought to be the ideals of users. I would encourage providers of component models to try to approximate these ideals for the benefit of their users. Current C++ bindings are too intrusive and expose too much of the component model to its users.

Naturally, concurrency will enter into the picture. Again, I claim that this can be done relatively non-intrusively with a heavy emphasis on libraries. The direct language support can be at the primitive level and remain invisible to most users most of the time.

Another issue that must be addressed is security. I don't mean just type safety, but guarantees of system integrity and controlled access to resources. I suspect this cannot be done without support from the operating system and the component model. This is an area of very active research.


What are the chances of an existing de-facto standard such as pthreads to become an integral part of the Standard Library (albeit with an object-oriented interface)? After all, significant parts of the C Standard Library were also designed with Unix in mind, and its designers didn't attempt to remain platform-neutral.

There is a good chance that some concurrency support will make its way into the standard. Platform neutrality will be seriously addressed because one of the nice things about the committee is that it has representatives using many different platforms. The committee also operates under a mandate to strive for consensus.

In another interview, you defined the C declarator syntax as an experiment that failed. However, this syntactic construct has been around for 27 years and perhaps more; why do you consider it problematic (except for its cumbersome syntax)?

I don't consider it problematic except for its cumbersome syntax. It is good and necessary to be able to express ideas such as "p is a pointer to an array of 10 elements that are pointers to functions taking two integer arguments and returning a bool." However,


bool (*(*p)[10])(int,int); 

is not an obvious way of saying that. In real life, I'd have to use a typedef to get it right:


typedef bool (*Comparison)(int,int); 
Comparison (*p)[10]; 

Even here, the parentheses around the pointers seem redundant, but aren't. Just about any linear notation would have been preferable. For example, try reading this left to right:


p: *[10]*(int,int)bool 

("pointer to an array of 10 pointers to functions taking two int arguments and returning a bool"). If you want both linearity of the declaration syntax and C-style equivalence of declaration syntax and expression syntax, all operators must be suffix (alternatively, all operators could be prefix, but you can't have both in the grammar).

However, familiarity is a strong force. To compare, in English, we live more or less happily with the absurd rules for "to be" (am, are, is, been, was, were, ...) and all attempts to simplify are treated with contempt or (preferably) humor. It be a curious world and it always beed.

That said, programming languages are still far simpler than natural languages, so I expect that someday we'll get a better declaration syntax and then we'll wonder how "those old timers" could stand the absurdities they had to live with.


In today's world of processors that are thousands of times faster than when C++ was first created, developer time is now considered much more valuable than program execution time in many development shops. How, if at all, should C++ change to recognize this fundamental shift?

Programmer time was always considered valuable, and we demand more than a thousand times more of our computers today.

I don't think C++ should be changed because of increased system performance. However, many programmers could benefit from that increase simply by relaxing a bit about low-level efficiency and worrying more about higher-level structure and correctness. Doing so implies focusing more on abstraction and on treating C++ as a high-level language. Significant benefits in ease of programming, ease of porting, and ease of maintenance can result, and there may even be performance benefits. Most of the serious inefficiencies in modern code stem from poor use of system facilities and poor algorithms.

I cringe when I see C++ programs written as a mess of arrays, macros, casts, and pointers. Such programs appear to be assembly code written using C++ syntax. Simply using the standard library vector and string rather than arrays is a significant step forward.

Also, not every program runs on a conventional computer with a screen and a user waiting for feedback. Most programs actually run as part of embedded systems controlling our telephones, cars, cameras, and gas pumps. There, resources are often severely limited and C++'s ability to produce tight and fast code matters. Also, many server applications and scientific applications work at the limit of their hardware resources. For example, if you are implementing a climate model in C++, it is essential to have libraries that perform computations as fast as optimized Fortran.


Which features are due to be added to C++ in the next revision of the standard, which is to take place in 2003? Which features do you consider to be the most needed?

The committee has deferred the discussion of that topic until it has addressed the defect reports referred to it and issued a technical report on performance issues. In the meantime, people are encouraged to gain practical experience with the current standard and to experiment with ideas for improvements.

I think this is a good way to encourage stability, and to avoid diluting the limited resources of time and skills that the committee can rely on. Remember that the committee consists of volunteers who all have "day jobs." Given that, I don't want to be seen to try to preempt the committee with a specific wish list. You can get a general idea of my views from other parts of this interview.


The newly approved C9X standard adds several constructs to ISO C, which are not supported by C++ yet. For example, the "long long" data type, "restrict" pointers, and several others. Will they be added to C++ in the future (if so, C++ will probably need restrict references too, right)? Do you believe compiler vendors should wait until the next revision of the standard, or should they implement these features as a non-standard extension?

The facilities added to C89 to create C99 will be considered in the context of C++. Undoubtedly, some will be adopted—just as some C++ features were included in C99. However, it will not be as easy as some people imagine. I believe that it is impossible to provide the full set of C99 facilities as C++ extensions without introducing incompatibilities with the current C++ standard.

A more serious and fundamental issue is one of language design philosophy. The C99 facilities were added to provide specific features (mostly Fortran-like support for numeric computation). This I consider an old-fashioned approach, and it clashes with the approach taken by C++ and most other modern languages, where the focus is on abstraction mechanisms that allow users and library builders to provide a wider range of specific facilities without complicating the core language.

Thus, C99 adds data types ("long long" and complex), specialized ways of initializing, conversion rules, and a new form of built-in arrays with special syntax. This contrasts with C++'s approach of using constructors to specify initialization and conversion, and of using standard library classes to provide new types. The C approach increases the complexity of the base language, emphasizes the irregularity of the built-in types, and overlaps with what C++ provides as standard library facilities. This implies that if you add the C99 facilities to C++, a user will have to choose between C99-style dynamic arrays and C++'s vectors, and between C99-style complex numbers and C++'s complex.

I consider this a failure of coordination of the evolution of C and C++ and a serious burden on the C++ community and on everybody who thinks of a C/C++ community. However, to judge from the net (always a hazardous thing to do), some consider the resulting hard-to-resolve incompatibilities a triumph of C-language independence. I think that a greater degree of coordination between the C and C++ committees is badly needed and that user communities and suppliers of C and C++ implementations could help by making their opinions heard.

"restrict" is a relatively simple issue. "restrict" could be added to C++ as a non-standard extension without doing harm. After all, a program using it would become a valid C++ program by simply macro substituting "restrict" to nothing. And yes, a C++ variant would need to provide restricted references also.

There are three reasons that "restrict" isn't part of C++ already:

  • The committee felt that enough was being added to C++ already so that new features should be minimized. An isolated feature such as "restrict" could and should be tried out in the communities in which it is most needed before it is added to Standard C++.
  • The "restrict" feature cannot in general be verified to be safe by a compiler. The committee felt that more work was needed to see if a better-behaved alternative could be found.
  • The C++ Standard Library provides valarray as a mechanism to support high-performance Fortran-like vector computation.

The reasons for reconsidering "restrict" are:

  • "restrict" appears to be becoming increasingly useful as more processors acquire architectural features such as long pipelines and parallel execution. When "restrict" was first discussed by the C++ committee, maybe 5 percent of all processors used by the C++ community had the high-performance features that made "restrict" valuable. Today, I suspect that percentage is closer to 95 percent.
  • We have not (to the best of my knowledge) found an alternative that lends itself to static type checking in a general C++ environment.
  • The valarray part of the C++ Standard Library has not yet taken hold on a significant scale. If it does, valarray will have no problem coexisting with "restrict."

Thus, the bigger issue is the coordination between the C and C++ national and ISO committees. However, every vendor of C and C++ implementations faces a dilemma between conformance to a divergent standard and making the full set of facilities available in all contexts.

My guess is that the vendors will resolve their dilemma by providing a lot of compiler options. That would push the problem onto the users, who will eventually become unhappy with that and demand standardization. My main hope is that the default compiler options will reflect the C++ standard for C++ programs and the C standard for C programs.

In general, I'd like to encourage compiler vendors to ship their C++ compilers with their default settings enforcing the ISO C++ standard. Unfortunately, the Windows world still has a way to go in this respect.


Now that the dust has had time to settle on the C++ standard and Java has been racing forward, making rather remarkable strides, what are your views on Java vs. C++? What do you think Java must do to become as "good" a language as C++? What lessons would you like to see C++ learn from Java? Are there any features of Java (or other languages) you think will drive adaptations of the C++ language?

I don't compare languages. Doing that well is hard and rarely done in a professional manner.

I think that C++ will evolve based on problems encountered by its users and according to its own internal logic. As ever, a host of ideas from other languages will enter into the considerations, but you can't simply lift a facility from one language and graft it onto another. You have to look at the techniques and concepts supported by a facility and then see how best to support those techniques and concepts in C++.

Sometimes the best thing to do is simply use more than one language. There is no one language that is sufficient for every application and every programmer. C++ is and will remain one of the best languages for a wide range of people and application areas. However, we should not fall into the trap of trying to add every possible feature to it in an attempt to please everyone. I think that Java and C++ are and will remain very different languages. They are syntactically similar, but their object models differ significantly.

To me, it is very significant that C++ has an ISO standard whereas Java is a proprietary language.


In the early Java years, there was a lot of hype about it being the ultimate programming language that would displace C++. In your opinion, what has been the effect of Java on the C++ development community in the last two to three years?

There is still an amazing amount of Java hype around. Despite Java's track record over the last five years, hordes of enthusiasts seem to believe that Java will soon displace not only C++, but also just about every other programming language. On the other hand, C++ use still appears to be growing. I suspect the main effect of Java on C++ has been to divert effort that would have led to better C++ tools and libraries into building the Java platform and several Java toolsets. Very little in Java is new to a student of programming languages, so there has been little effect on the C++ definition. In that area, Java is still playing catch-up (for example, I think it is just a matter of time before Sun will add a template-like mechanism to Java).

People ought to realize how different the aims of Java and C++ are. Look at the design criteria for C++ (for example, in D&E) and you'll see Java isn't even close. On the other hand, C++ clearly isn't a good match for Java's aims either.

Maybe I should end this interview by pointing out that C++ is still my favorite language and that it has no match when it comes to writing code that is simultaneously efficient and elegant for a wide range of application areas and platforms.