
I respect jwz, but this is very much a step backwards, and while I appreciate his complaint about breaking existing APIs, this would be for programs nearly 20 years old, targeted at a different platform.

OpenGL is a terrible, awful, crufty API, and the reason those methods were removed is that they are comically suboptimal. They do not reflect anything remotely like modern card capabilities, and their use directly causes harm to the ozone layer, kittens, and infants. I'm pretty sure that glBegin() gave a coworker cancer, and the matrix stack has claimed more lives than Kevorkian.

Building a shim to port over old OpenGL 1.3 apps is kind of like translating the Necronomicon into English--possible, of questionable utility, and likely to bring about insanity and demons.

As others have pointed out, OpenGL ES is not intended to be an extension of OpenGL--it was a chance to break out a lot of the dumb cruft that had accumulated into the API. Most of the features he's complaining about are either bad practice or should be gotten rid of entirely.

Compare the length of the API listings for GL 1.x, 2.x, and modern 3.x / 4.x. Remember that the whole thing is a hissing, clanking state machine, and that interactions between functions can be arcane--and threading presents additional issues.

Immediate mode rendering with glBegin()/glEnd()/glVertex()/glNormal()/etc. is ugly. Any shim that collects that information still has non-trivial work stuffing it into a buffer, and the overhead of drawing anything with more than a few hundred triangles soon becomes absurd. Worse, this style of programming discourages storing geometry on the card, which causes additional inefficiency--and trying to use those calls remotely over X causes all kinds of stupid, as GLX can barely do indirect rendering anyway.
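
To make the contrast concrete, here is roughly what the two styles look like side by side (a sketch, not anyone's shipping code; it assumes a shader with position and color bound to attribute locations 0 and 1):

    /* OpenGL 1.x immediate mode: one call per attribute, per vertex, per frame. */
    glBegin(GL_TRIANGLES);
      glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
      glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
      glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();

    /* OpenGL ES 2.0 style: upload once into a VBO, then draw it every frame. */
    static const GLfloat verts[] = {
        /*  x      y     z     r     g     b  */
        -1.0f, -1.0f, 0.0f, 1.0f, 0.0f, 0.0f,
         1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
         0.0f,  1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
    };
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void *)0);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat),
                          (void *)(3 * sizeof(GLfloat)));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glDrawArrays(GL_TRIANGLES, 0, 3);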

Additionally, we now have generic vertex stream attributes available, which are very flexible and don't map onto that model anymore. It's time to let go.

~

tl;dr: jwz is complaining about a fork of an API that removed cruft people depended on, but the cruft needed removing. :(



jwz makes it clear that his deeper complaint is not about removing the cruft per se, it's about removing vast swaths of the API and then continuing to call it "OpenGL". If they'd called it "MobileGL" or some other nonsense, he probably wouldn't have liked the result (based on this rant), but he also wouldn't have complained that they broke working code.

EDIT: 'continuing to call it "OpenGL"' -> 'using "OpenGL" in the name at all' Effectively, he's complaining about a form of false advertising.


They didn't call it "OpenGL". They called it "OpenGL ES". The ES is a significant part of the name. The fact that the name happens in part to contain the word "OpenGL" is not a promise of source compatibility, and even the most cursory glances at the documentation would have made it clear that the API has a different name because it is a different API.

It's incredibly daft to argue that part of a name amounts to a promise of backwards compatibility with a different API, in perpetuity. The X11 protocol isn't backwards compatible with that of X9 simply because they both share a name-fragment.


Right, but again: those are the parts of the API that are already discouraged and downright deprecated as of OpenGL 2.0. If you were developing an app targeting OpenGL 2.0, following what I understand are the best practices for that version of the specification, OpenGL ES will require little to no porting.

The problem then, arguably, is not that OpenGL ES is called OpenGL but that OpenGL 2.0 added an entirely unrelated pipeline to that used in OpenGL 1.3: you could then claim "how dare they reuse the name for what is now two APIs stuck into one library". However, they shared a lot of underlying conventions, and they were honest about bumping the major version number.

If anything, the fact jwz is happy to have proved--that you can build an OpenGL 1.3 emulation library on top of OpenGL ES--argues to me for /not/ including the 1.x features in the standard, and instead encouraging third parties to distribute such libraries. The fixed function pipeline wasn't removed from the API because it is unimplementable, but because it is a ton of obsolete code that 2.x coders avoid anyway.


" … but to instead encourage third parties to distribute such libraries."

What _I_ took away from jwz's rant (and agree with) is that if providing backwards compatibility to existing users is something one guy can do in three days (including doing the research to find out exactly what's getting taken out of an API), then it seems entirely reasonable to expect a "well behaved" standard like OpenGL to have provided the 1.3 emulation library themselves. Second best would be to have a well-defined deprecation period with appropriate warnings to developers, which they also failed to do according to jwz, or did properly in OpenGL 2.0 according to you. Whichever of you is right there doesn't _really_ matter much, since it's arguing over whether they got the "second best" thing right, when they seem to have failed at the "right" thing.


It's not something that can be done in 3 days. Fixed function on top of shaders is a PITA. If you want any kind of speed you've got to generate shaders on the fly based on which features you've turned on or off. Otherwise you create an uber shader that's slow as shit.
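
A rough sketch of what "generate shaders on the fly" means in practice (all names here are hypothetical, not any real driver's code): key a cache of compiled programs on the fixed-function state bits that are currently enabled, and build each variant only once.

    enum {
        FF_LIGHTING   = 1 << 0,
        FF_TEXTURE_2D = 1 << 1,
        FF_FOG        = 1 << 2,
    };

    static GLuint program_cache[1 << 3];   /* one slot per combination of state bits */

    /* build_program() is hypothetical: it would assemble GLSL source that only
       does the work the enabled bits ask for, then compile and link it. */
    extern GLuint build_program(unsigned state_bits);

    GLuint program_for_state(unsigned state_bits)
    {
        if (program_cache[state_bits] == 0)
            program_cache[state_bits] = build_program(state_bits);
        return program_cache[state_bits];
    }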

glBegin/glEnd are shit APIs given how GPUs work nowadays.

Worse, things like flat shading require generating new geometry on the fly.

Fixed function pipelines suck balls.

OpenGL ES 2.0 FTW!


Finally, someone who actually works in the CG industry replies. +1 to this. No one really uses fixed function stuff these days; everything is shaders and vertex and index buffers. There are no fixed function hardware units, everything in the graphics pipeline is programmable and done in shaders. Even using fixed function stuff on today's hardware forces the driver to compile a built-in shader. In the interest of keeping driver size small (for mobile apps), they force the programmers to write their own shaders and throw away the fixed function stuff that would bloat the driver and slow the shader compiler.


New code doesn't use fixed function stuff these days. JWZ's point is that there is more than new code. Legacy code also matters, e.g. CAD applications. Those have little use for shaders. Frankly, your point of view sounds very game-centric to me.

Both nVidia and ATI have committed to supporting these older APIs for the foreseeable future.


Old code doesn't just convert itself to using shaders and vertex and index buffers.

Also: old code isn't necessarily useless code.


Maybe not - but imagine the loss in hardware sales and ecosystem revenue if everyone ported old shitty games without re-writing them, causing batteries to die quickly and giving users a poor experience?

It was for the better of the industry. Boo-hoo. If it took him 3 days then he's a smart fucker. As someone with plenty of OpenGL AND OpenGL ES experience, I'd say it would have taken him just as much time to port his existing code.


And if that were the end of the story, I think we'd be able to call it a day. But everyone has this funny expectation that that old code should keep getting faster with newer GPUs, in spite of the fact that GPUs don't work the way those programs were designed to use them.

Getting modern GPU performance, or anything close to it, through the crufty old immediate-mode API code is like drawing blood from a stone. Eventually developers need to take some responsibility for the code they're maintaining and migrate to a more modern API. Even on the desktop they'll have to do this - when their customers ask for modern GPU features, they'll have to move to OpenGL 3, which doesn't have immediate-mode either.


> If you want any kind of speed you've got to generate shaders on the fly based on which features you've turned on or off. Otherwise you create an uber shader that's slow as shit.

I think jwz's point is that he prefers having his old code run very slow through a compatibility layer rather than having to port the same not-so-important old code over the new APIs.

He wants to trade developer time for execution time, something that may be very sensible in some cases (probably not in most, but for fancy screensavers...).


> I think jwz's point is that he prefers having his old code run very slow through a compatibility layer rather than having to port the same not-so-important old code over the new APIs.

If that's what you want, just write it yourself once (which he did) or use one of the many (subsets of) fixed-function pipelines running on OpenGL ES that others have made. The official 'Programming OpenGL ES 2.0' book even shows you how to do most of it, with example code included.

What jwz fails to recognize is that OpenGL ES does not only have to run on iPads, iPhones or other relatively high-powered mobile devices, but also on extremely low-powered devices with really small memory sizes (RAM and ROM) where every byte (code or data) counts. Compared to mobile devices at the time the first OpenGL ES APIs were designed, an iPad could almost be considered a supercomputer. For OpenGL ES, small API size was one of the design constraints, simple as that.

Last but not least, OpenGL ES was supposed to become the industry standard for mobile 3D graphics, which means it needed strong industry support. Stuffing the API with loads of crap that almost nobody would use would drive up implementation costs for no good reason. Programmable shaders are called 'programmable' for a reason: if you want to do very specific stuff with them (such as emulating the fixed-function OpenGL pipeline), there is nothing preventing you from doing so.
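
For instance, the vertex half of the old transform-and-color path comes down to a few lines of ES 2.0 GLSL once you supply the matrix yourself (a sketch with my own uniform/attribute names, not anything from the spec or the book):

    /* Minimal GLSL vertex shader, as a C string, doing what glLoadMatrix +
       glVertex + glColor used to do: transform by an application-supplied
       modelview-projection matrix and pass the per-vertex color through. */
    static const char *vs_src =
        "uniform mat4 u_mvp;\n"
        "attribute vec4 a_position;\n"
        "attribute vec4 a_color;\n"
        "varying vec4 v_color;\n"
        "void main() {\n"
        "    v_color = a_color;\n"
        "    gl_Position = u_mvp * a_position;\n"
        "}\n";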

The single point I can kind of agree with is that maybe they should not have used 'OpenGL' in the name of the API, because it suggests at least some form of compatibility with previous OpenGL versions. Confusing indeed, but not really worth the kind of rant in this article.


OpenGL 2.0 was released in September 2004 (Wikipedia).

OpenGL ES 1.1 (which jwz complains about here) was ratified and publicly released in August 2004 (going back to OpenGL ES 1.0 would only make the comparison worse, of course): http://www.khronos.org/news/press/khronos-group-announces-th...

That's a quite, um, impressive deprecation cycle, I suppose.


Seven years is not a long time. My toaster is older than that, and I like to think the people who made that would be embarrassed if those cheap moving parts had decayed so quickly.

You probably have enough horsepower at your disposal to emulate each and every computer you ever bought (simultaneously!) and run all that software forever. But instead we're going to require any tool you want to use to be rewritten half a dozen times over the course of your career alone. And why? Because fuck you, we just can't be bothered to start taking engineering seriously.


If bread had changed as much as GPUs have in the past seven years, your toaster would be obsolete, too.

This isn't about good engineering vs bad, this is about mature technology vs a rapidly developing field. Different characteristics beget different engineering trade offs.


Cars are switching from petrol to hybrid to electric motors, but roads still work... "Rapidly developing" is a red herring.


Bad analogy.

Car engines and transmissions are the heart of the vehicle, and those change frequently as well. Roads are a fundamental, static, landscape feature today. They're like telephone poles and fiber conduits, neither of which have changed much in recent times.

Software APIs change to match the features/needs of the users and developers. Part of this is based on the changing hardware, part on desired features. The hardware today is vastly different than when OpenGL 1.1/1.5 was available so why should we be constrained to use it in the same fashion?

In short, APIs shouldn't be static for now and forever, we'd only be limiting ourselves and ignoring the fact that sometimes things change and sometimes early decisions were wrong (or less effective than desired).


Roads still work because they are too expensive to replace. GPUs and bread are not, so your argument does not make sense.


Expensive things are more likely to work? I can tell you've not worked long in this industry, my friend.


I think that paraphrase was slightly closer to the opposite of what tinco said than what he did say.


>Cars are switching from petrol to hybrid to electric motors, but roads still work...

Yes, and this is what makes this a bad analogy.


He's got the whole analogy inverted. If roads were rapidly changing, we'd need dramatically different cars to handle the new roads.


Touché: I apparently wanted "OpenGL 1.5", not 2.0. This, in fact, undermines part of my argument regarding the major version number. Further, reading through the history and timeline a bit better, I am now concerned I was horribly misinformed. I would just ignore my comment.


So OpenGL ES (Embedded systems) was not different enough? I actually think OpenGL ES didn't go far enough in the initial spec.

ES 1.1 still has a lot of the old fixed function pipeline, but then in 2.0 they scrapped it all entirely, opting for a smaller, more modern API. That, to me, was a bad move. I think they should've made the shader-based pipeline in 2.0 part of the 1.1 spec.


Interesting. I hear what you are saying.

To contradict everything I've said elsewhere on compatibility - I'm actually in favor of the 1.0/1.1 OpenGL ES standards and would have liked WebGL to offer 1.1.

My self-deceiving justification in this instance is that plenty of platforms that can touch the web are still running hardware only capable of fixed function (e.g. Intel 945GM).

Fixed function is easy to make safe, and offers a path to leverage 3D acceleration on these platforms that cannot be beaten in software. Sure, it's not as exciting as 3D with programmable shaders, but it is still useful 3D nonetheless.


Fixed function is no easier to make safe than shaders so that's a false assumption.

OpenGL ES 1.1 on JavaScript would be a joke. Do you really think JavaScript is up to tens of thousands of

   glVertex
   glColor
   glNormal
calls per model per frame?

The rest of the graphics world left OpenGL 1.x long ago. OpenGL 4.0 has none of the fixed function stuff in it anymore either.

Using fixed function features in 2012 is like using oldskool 80s BASIC with line numbers and no functions as your programming language.

It's time to move on.


I've delivered a paid contract that would say otherwise =)

(Not OpenGL ES1.1 but a similar narrow API. One frame latency penalty in my case as the calls enter a staging area for analysis one frame before tiling and dispatch).

Don't forget the original iPhone has also been running fully sandboxed OpenGL ES 1.x.

So it's already been done.

Things like PCC (NaCl) and Xax (to a lesser degree) also reveal surprising results, as you would know.


OpenGL ES 1.1 != OpenGL 1.1

OpenGL ES 1.1 does not have support for immediate mode, thus no glBegin/glEnd and no glVertex etc. It's just VBOs.

Do note that fixed function does not mean the same thing as immediate mode (which is what the glVertex etc. calls are all about).


Based on Wikipedia, the 945GM has a GMA950, which supports Pixel Shader 2.0. That should be enough for WebGL.

At that point, the issue is more likely to be driver quality. Allowing people to use weird fixed-function corner cases of the OpenGL spec would make problems more likely.


Are you kidding? The 945GM doesn't have fixed-function hardware! The entire 3D pipeline save for the shaders is implemented in _software_.

Intel didn't even add fixed-function hardware until the GMA X3000 series in the G/GM965 and GL960 chipsets. Hell, it looks like they removed it in everything since the i740.


But it's not OpenGL. It's OpenGL ES. Windows 8 is also not Windows 95.


> Windows 8 is also not Windows 95.

Windows 8 is backwards compatible with Win32 and even Win16. If it wasn't, there would be hell to pay. WinRT is a somewhat clean break with the Win32 legacy. But that's only required for Metro apps.


To be fair, 64-bit Windows 7 (and Windows 8) don't support Win16 (as I understand it, because you can't sneak real-mode code into x86 64-bit mode the way you can in x86 32-bit mode).

OTOH, let's be clear about what Win16 is. Win16 is an API that Microsoft deprecated in 1995 (not coincidentally, when Windows 95 was released). In other words, Microsoft took 14 years to go from deprecation to significant (partial) non-support.


One major reason there have been 32-bit versions of all Windows releases up to and including Windows 8 is so that corporations big and small will be able to seamlessly continue running their crusty Win16 and MS-DOS legacy applications. If not for that, many companies would not have been able or willing to upgrade. Microsoft could instead have taken a DOSBox or Rosetta style approach. Hopefully they will do that with some future release, so developers will no longer have to worry about 32-bit support.

So, I feel my original statement was accurate in both its letter and spirit.


Oh, I completely agree with the spirit of your statement. And my disagreement over the letter is over the interpretation of an unqualified "Windows 8" (or "Windows 7"), not about the substantive facts.

The reason I went into that level of detail is because I wanted to highlight the insanely long 14-year deprecation cycle. Is that a record for a deprecation that was eventually removed?


I think if you focus on the win{16,32} API, it doesn't do any justice to, for instance, the C library. FreeBSD has always supported binaries from various UNIX systems and I think this heritage goes back a bit further than 1995. Also, Windows bundles a C library of its own which implements a significant subset.

Sure win{16,32} support a GUI with bells and whistles but that's because it evolved several years later specifically to support this newer mode of UI. It does not support 50-year old teletypes.


You misunderstand my point about length. I'm completely aware that there are plenty of APIs that have been supported far longer than Win<anything>. That's not what I'm talking about.

What I'm wondering is this: Has there ever been an API with a longer deprecation period than Win16? Remember, Microsoft announced the deprecation in 1995 (IIRC), but it didn't start to bite until 64-bit Windows mattered (you could draw the line at some server versions, Vista, or, as I do, at Windows 7). No matter how you slice it, that's a long time.


X11: 1987. Still works in 2012.


X11, not backwards compatible with X10 or X9.

OpenGL 4.2, backwards compatible to OpenGL 1.0 (1992)

OpenGL ES, not backwards compatible with OpenGL.

There's an obvious parallel here of APIs being compatible within themselves but not across boundaries between major architectural revisions intended to throw out cruft and target new environments.


> X11, not backwards compatible with X10 or X9.

That's why it's called X11 and not X10 ES.


This line of reasoning is utterly absurd. They are different APIs with different names. The specifics of the substrings they have in common and the format of the substrings that differ is utterly irrelevant.

OpenCL is a different API from OpenAL, Cocos-2d is an API for an entirely different language than Cocos-2d-x, the Cocoa API is wildly incompatible with Cocoa Touch. Horrors!

You determine whether or not two APIs are compatible (or even striving to be the same kind of API) by reading the documentation, not by applying stupid heuristics to common substrings in their names.


X11 wasn't deprecated in 1987, that's when it was released. So far as I know, X11 hasn't been deprecated.

Side-note: Win16 (called "the Windows API" back then) actually predates X11 (though not previous versions of X) since Windows 1.0 was released in November 1985.


Sure, MS took 14 years to deprecate this.

Which absolutely doesn't mean you should wait 14 years before you stop writing for Win16!

And I think that's what the rant is implying.


OK, Mac OS X Lion is not System 9. And no, Lion isn't backwards compatible with System 9, and there hasn't been hell to pay.


The difference is that tens of thousands of business users used System 9. The Microsoft platform probably had hundreds of millions.


The actual name of the OS changed to 'MacOS' as of MacOS 8.x. The 'System X' designation ended with System 7.5, sadly.


I'd be interested in seeing an argument explaining why these API calls smother puppies when the premise of the original article is that you can in fact offer them as an interface to the shiny new better way of doing things, without accidentally summoning Cthulhu. If he's wrong on that point I'd like to see a clear explanation of why.


I'm not an OpenGL expert, but my understanding is that the fixed function pipeline (FFP) is like a set of big, generic shaders and state that everybody had to go through to actually do the work of displaying 3D graphics. You could write shaders a fraction of the size that do just the work you need, without having to pay the performance price for features you don't use. FFP-using code looks nice in tutorials, but performs terribly outside of demos, never mind the complexity tax it imposed on implementations as a 3D graphics layer for people who don't understand how 3D graphics works.

Also the old OpenGL API had immediate mode functions which encouraged people to trickle in interleaved data and operations; the exact opposite of what 3D APIs need to run fast.


The idea is very simple. With the fixed pipeline you have a constant pipe diameter you cannot change.

Imagine that you plan to handle 3 million vertices and to draw 6 million points on the screen (fragments), so you size your pipes for that.

Now, what happens when you need to update only 200 pixels but want to draw 30 million vertices onto them? You can't do it with fixed function.

What happens when you want to do 10 passes over the screen (60M points) but you only use textures with 4-8 vertices? You can't do it with fixed function either.

With a programmable pipeline you can put your compute units to work wherever you need them.


Sure, here's my attempt at explanation:

OpenGL is a gigantic mess, one which only somewhat recently has started to get better. For those that don't know, its lineage goes back to IrisGL and big-iron Silicon Graphics machines. There's a wonderful recap of its history on Stack Overflow ( http://programmers.stackexchange.com/questions/60544/why-do-... )--long story short, design-by-committee and squabbling vendors (especially the CAD folks, whom I until recently counted myself among) resulted in bloated, sad, crufty APIs.

Having to maintain a codebase to mimic old OpenGL functionality, especially when in some cases it wasn't particularly well-defined/standardized, in addition to coming up with a small profile for new features on embedded systems, would present a nontrivial burden on the driver and hardware writers. Hell, even Intel has only somewhat gotten it right recently--and they've had the OS community via Mesa do most of the work for them (as I understand it)!

These aren't features that are hugely important, these aren't features that are game changing; these are a lot of things that are simply obsolete or unnecessary. jwz laments the lack of quads support, so let's start there:

OpenGL 1.x supported the following primitive types: points, lines, line strips, line loops, triangles, triangle strips, triangle fans, quads, quad strips, polygons (see http://www.opentk.com/doc/chapter/2/opengl/geometry/primitiv... for examples). Several of these options are quite redundant, and supporting them is not really helpful. Moreover, several of them present interesting questions for a driver writer: what is the preferred way of decomposing quads or polygons? Strips? Fans? Discrete triangles again?
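
For what it's worth, losing quads costs the application about this much: split each quad into two triangles with an index list yourself (a sketch, assuming the usual counter-clockwise winding).

    /* A quad with corners 0,1,2,3 (counter-clockwise) becomes two triangles. */
    static const GLushort quad_indices[6] = {
        0, 1, 2,    /* first triangle  */
        0, 2, 3,    /* second triangle */
    };
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, quad_indices);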

Sphere mapping has, I believe, been replaced with cube mapping. OpenGL ES 1.1 has cube mapping as an extension, but I don't know if Apple decided to implement it or not--such is part of the evil of OpenGL, this use of extensions.

1D texturing (and 3D texturing) were omitted, again presumably to make implementors' lives easier. To work around this, fill a whole 2D texture with a gradient, and clamp on the edges when sampling (glTexParameteri with GL_CLAMP_TO_EDGE should do this, I think...?). Hopefully that would work. Only recently have 1D and 3D textures gotten really useful, for clever tricks in passing LUTs and such to the programmable shader pipeline; I think the older use for them was ghetto cel-shading and palette mapping--cool but not critical.
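
Something along these lines, assuming a 256x1 ramp; gradient_pixels is a made-up name for data you'd fill in yourself:

    /* 256x1 RGBA ramp, filled elsewhere (hypothetical). */
    extern const unsigned char gradient_pixels[256 * 4];

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Store the 1D gradient as a 256x1 2D texture... */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 1, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, gradient_pixels);
    /* ...and clamp so sampling never wraps past the ends. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);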

~

Anyways, the problem with requiring that the library writers support all that is again that they would have to create most of the OpenGL environment (which is terrible), and then map it onto their new environment (even more terrible), as well as develop the new environment. This is nuts.

It's similar to asking if people could write a portability layer atop Win32 to support Win16 to support old DOS system calls--anyone can do a subset of that and complain that "Hey, it's easy!" but to do it right (and you must do it right, or else somebody else will complain!) is very nontrivial.

For a more timely example, consider the issues folks have had getting people to move on to Python 3--and contrast that with what the Rubyists have accomplished by just moving fast and fixing things as they break.

Or think about the amount of time/money spent on keeping the COBOL infrastructure up, or supporting legacy VB6 installations.

Honestly, sometimes we should applaud vendors for Doing the Right Thing and trying to force users into fixing outdated code.


> It's similar to asking if people could write a portability layer atop Win32 to support Win16 to support old DOS system calls--anyone can do a subset of that and complain that "Hey, it's easy!" but to do it right (and you must do it right, or else somebody else will complain!) is very nontrivial.

Didn't Microsoft actually do that? Isn't that how we have Win16 support in 32-bit Windows 7 today?


> OpenGL is a terrible, awful, crufty API, and the reason those methods were removed is that they are comically suboptimal. They do not reflect anything remotely like modern card capabilities

But the guys who wrote it weren't idiots. They were in fact super smart engineers working at the cutting edge company of the day, SGI. And they made a philosophical call, which was that OpenGL should be an abstraction of geometry and an idealized rendering pipeline with just enough hardware-specific hackery in it to make it perform[1]. They did the best they could operating under the constraints of the state-of-the-art of the time and the resources they had available. And people still use OpenGL, decades later, and have done amazing things with it.

Isaac Newton said "If I have seen further, it is by standing on the shoulders of giants". Kids these days, talking about legacy technologies, would be wise to remember that.

[1] Whereas Microsoft believed in an abstraction of physical hardware, with just enough geometry in to make it useful.


That's not at all true. The original OpenGL API is very much a direct mapping of the original SGI graphics hardware. Most OpenGL calls on SGI are single CPU instructions feeding the data to the hardware, which fully implements the whole OpenGL state machine.

It's just that modern GPUs work in completely different ways, so this kind of API is useless for them.


No, you have it backwards :-) SGI devised OpenGL and then implemented it in hardware, not vice versa!


Put some info about yourself into your profile so that graphics nerd colleagues (me) can learn about your very interesting work experience!


By the same reasoning, we should remove printf from libc. It's a terrible, awful, crufty API with threading issues, and its overhead on modern windowed systems is just horrible.


Come now, be reasonable.

If we released some sort of libc for embedded systems, specifying only fprintf(), your analogy would be valid.


In fact, the concept of "freestanding implementation" (as opposed to "hosted implementation", which is an implementation of the full standard) exists in C, and is sometimes used in embedded systems:

  a conforming freestanding implementation is only required
  to provide certain library facilities: those in <float.h>,
  <limits.h>, <stdarg.h>, and <stddef.h>; since AMD1, also
  those in <iso646.h>; since C99, also those in <stdbool.h>
  and <stdint.h>; and since C11, also those in <stdalign.h>
  and <stdnoreturn.h>
(source: http://gcc.gnu.org/onlinedocs/gcc-4.7.1/gcc/Standards.html)

So yes, a conforming (freestanding) C implementation without printf for embedded systems can exist.
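
For the curious, a freestanding translation unit really can get by on just those headers; something like this is typical on bare-metal targets (the UART register address here is made up):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical memory-mapped UART transmit register. */
    #define UART_TX (*(volatile uint8_t *)0x4000C000u)

    /* No printf: write the string out a byte at a time. */
    void uart_puts(const char *s)
    {
        for (size_t i = 0; s[i] != '\0'; i++)
            UART_TX = (uint8_t)s[i];
    }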



