James Antill

Feb. 7th, 2006

10:46 pm - HTTP for desktop applications

So to continue the HTTP theme: Miguel's idea that desktop applications should communicate via HTTP seems completely insane. I can only presume Miguel was not involved at all in the Mono HTTP API implementations. As part of writing my webserver, I've already written about how terrible the HTTP spec is ... hell, apache-httpd blatantly ignores significant parts of it.

It's not like there's even a hope that by "HTTP" Miguel meant just "METHOD <path> <version>\r\nHeader: value\r\n\r\n" ... much like people often do when they talk about "XML" but really mean "something in readable text delineated by angle brackets" ... because he wants to be able to "reuse" existing implementations. So, here are my responses to the list of "benefits" of using HTTP:
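For reference, that bare wire format is trivial to produce and parse by hand; this is a minimal sketch in Python (just bytes, no real networking), not anything a real client should use:

```python
# Build a minimal HTTP/1.1 request by hand: request line, headers, blank line.
request = (
    b"GET /status HTTP/1.1\r\n"
    b"Host: localhost\r\n"
    b"\r\n"
)

# Parse it back: the request line is "METHOD <path> <version>".
head, _, body = request.partition(b"\r\n\r\n")
lines = head.split(b"\r\n")
method, path, version = lines[0].split(b" ")
headers = dict(line.split(b": ", 1) for line in lines[1:])
```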

HTTP is a well known protocol
HTTP is a well mis-understood protocol with a huge array of parsing pitfalls. And that doesn't even include the number of things people screw up that you should work around if you want to be nice. Basically all HTTPDs have had security bugs in their HTTP parsers; and-httpd is a big exception here ... and even then I'd never suggest I have implemented HTTP bug free.
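One illustrative pitfall (my example, not one from any specific parser): naively splitting a header on every colon mangles any value that itself contains one, which is exactly the kind of thing hand-rolled parsers get wrong on a Host header with a port:

```python
header_line = "Host: example.com:8080"

# Naive parse: split on every colon -- the port gets sheared off the value.
naive = header_line.split(":")

# Correct parse: split only at the first colon, then strip the value.
name, _, value = header_line.partition(":")
value = value.strip()
```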

There are plenty of HTTP client and server implementations that can be employed
There are plenty of broken client/server libraries, and there isn't even a well defined way to communicate "here's a list of stuff" without resorting to complete insanity like XMLRPC or WebDAV. Also, hooking anything into more than one server basically means writing CGI (or possibly FastCGI) or doing multiple implementations.
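To make the "list of stuff" point concrete: the closest thing HTTP has is WebDAV's PROPFIND, whose XML reply dwarfs the data it carries. A rough sketch of the two encodings (the XML here is hand-written and heavily abbreviated, not output from a real WebDAV server):

```python
files = ["a.txt", "b.txt", "c.txt"]

# What you actually want to say: "here's a list of stuff".
plain = "\n".join(files)

# Roughly what WebDAV makes you say instead (abbreviated).
dav = '<D:multistatus xmlns:D="DAV:">' + "".join(
    "<D:response><D:href>/%s</D:href></D:response>" % f for f in files
) + "</D:multistatus>"
```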

The protocol can be as simple or as complex as needed by the applications. From passing all the arguments on a REST header as arguments to the tool-aware SOAP
Riiight. Show me a single HTTP client that does SOAP requests which degrade to REST. You basically have to pick one, so you go from clients needing TCP+MYPROTO to TCP+HTTP+REST+MYPROTO ... and this is better? ... for something that 99.999% of the time is going to be machine local?
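The layering complaint, made concrete (the one-line protocol and the URL scheme are made up for illustration; the byte counts are the point):

```python
# A made-up local protocol: one line, one operation.
custom = b"PLAY track-42\n"

# The same operation once it has to ride on HTTP + REST conventions.
rest = (
    b"POST /player/play HTTP/1.1\r\n"
    b"Host: localhost\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"Content-Length: 14\r\n"
    b"\r\n"
    b"track=track-42"
)
```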

HTTP works well with large volumes of data and large numbers of clients
hahahaha. Apache dies easily with small numbers of clients, and only got large-file (LFS) support in 2.2.0.

Scaling HTTP is a well understood problem
Maybe ... but it's not easy. And the current scaling knowledge is all about "browser HTTP", i.e. GET is basically the only method used ... if GNOME introduces random methods for different things on the desktop, most of the current scaling knowledge goes --> that way.

Users can start with a REST API, but can easily move to SOAP if they need to
This is a repeat of problem/benefit #3 ... again implying that having random crap at the end of the HTTP transport is any more useful than having an MS Word file containing a single element with base64 encoded data in it.

HTTP includes format negotiations: clients can request a number of different formats and the server can send the best possible match, it is already part of the protocol. This colors the request with a second dimension if they choose to.
Sure, and how many HTTP servers/services currently implement this? I've not seen a CGI that does; lighttpd can't do it; apache-httpd has .var files (which are unmaintainable, IMO) and "multiviews" for normal people (which you shouldn't enable, as performance then becomes horrible). There's also the problem that most browsers don't implement it well either (i.e. firefox "prefers" XML and XHTML over HTML, even though it renders HTML better (and faster) ... hell, it doesn't even render link tags in XML), so it's not exactly been tested a lot in the field.
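For anyone who does want to implement it: the core of server-side negotiation is just sorting the client's Accept entries by q-value and picking the first type you can actually produce. A minimal sketch (ignoring wildcards and media-type parameters other than q, which a real implementation can't):

```python
def pick_type(accept_header, available):
    """Pick the best available media type for an Accept header.
    Ignores wildcards and parameters other than q."""
    prefs = []
    for part in accept_header.split(","):
        fields = part.strip().split(";")
        mtype = fields[0].strip()
        q = 1.0  # absent q-value defaults to 1.0
        for param in fields[1:]:
            key, _, val = param.strip().partition("=")
            if key == "q":
                q = float(val)
        prefs.append((q, mtype))
    # Highest q wins; return the first type the server can produce.
    for _, mtype in sorted(prefs, reverse=True):
        if mtype in available:
            return mtype
    return None
```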

Servers can take advantage of HTTP frameworks to implement their functionality on the desktop
I think this is a "free remote desktop" play: now with caching, and it goes through firewalls! Because that's the only reason people aren't running remote apps. on their home machines, at work.

It is not another dependency on an obscure Linux library
Yeh, I'm sure most people know what libcurl or libneon are ... and anything else that actually solves the problem is going to be shipped by everyone that cares. Plus you'll need something better than "just use HTTP and link with curl" ... which will be in its own "obscure Linux library".

The possibility of easily capturing, proxying and caching any requests
Doing captures is a benefit ... and I can almost imagine designing a protocol over HTTP for that (of course nothing will decode the bit over HTTP ... but it gets you some way there). Proxying is just another battle in the users vs. IT dept. war ... and I can't see the users winning that one. Caching is laughable: most web apps. are terrible at caching ... and most of the simple "HTTP clients", like robots and Atom readers, aren't implementing Accept-Encoding/If-None-Match/If-Modified-Since (in spite of god knows how many words on the subject).
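The validator dance those robots and feed readers skip really is tiny; the server side of a conditional GET is essentially just this sketch (using an MD5 hash of the body as the ETag is my assumption for illustration, not how any particular server does it):

```python
import hashlib

def respond(body, if_none_match=None):
    """Return (status, etag, body) for a GET, honouring If-None-Match."""
    etag = '"%s"' % hashlib.md5(body).hexdigest()
    if if_none_match == etag:
        return 304, etag, b""  # client's cached copy is still valid
    return 200, etag, body

status, etag, _ = respond(b"<feed/>")          # first fetch: full body
status2, _, body2 = respond(b"<feed/>", etag)  # revalidation: 304, no body
```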

Redirects could be used to redirect requests to remote hosts transparently and securely
Maybe, although I don't see what is secure about it. And, again, most clients already have problems differentiating between 301, 302, 303 and 307 ... AFAICS firefox does the same thing for all of them.
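For the record, what the four codes are supposed to mean per RFC 2616 fits in a small table; this is spec behaviour, which (as above) is not what the browsers actually do:

```python
# status -> (permanent?, method on the follow-up request, per RFC 2616)
redirects = {
    301: (True,  "same method"),  # Moved Permanently
    302: (False, "same method"),  # Found (browsers wrongly switch to GET)
    303: (False, "GET"),          # See Other: always refetch with GET
    307: (False, "same method"),  # Temporary Redirect: method must not change
}
```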

Current Mood: scared
