Seeing things the way in which one wants them to be (not the way they are)

DabbleDB’s Really Bad URLs

Throughout history there have been people who only saw things as they wanted them to be. People with strongly held beliefs whose values guided their actions, be they counter-productive, detrimental, or, worse, just plain wrong; nothing mattered but to believe the world was as they wanted it to be. And it’s not just those from the past who are guilty; nay, it seems that people the world over are now more ideological than in any period in my own lifetime. I could give many examples, but whatever examples I gave I’d be sure to offend over half of my readers!

It’s probably pretty obvious that I’m talking mostly about war and religion in the above, but it’s also sad to see the same from technologists. Case in point: the creators of DabbleDB and the Seaside framework. There is an unfortunate school of thought among some web developers, which Avi Bryant evidently shares[1], that clean URLs are simply not important, that they are just the obsession of overly pedantic developers pursuing unimportant elegance. And those opinions are often rationalized by statements like these (on Mike Pence‘s blog) that clearly exhibit confirmation bias:

“I have not had one single person ever mention in Dabble DB that the URL’s look funny. People are used to it. If you go to Amazon.com, the URL’s have all kinds of opaque identifiers in them. It is just not something that the average user cares about. I think it becomes an obsession for developers to have this sense of having a clean API exposed by their web application, but I think you can have a clean API that does not have to include every single page in your app, and I don’t think that every single page in your app has to be bookmarkable. I think that as long as a bookmark gets you back, roughly, to where you wanted to be, or for really crucial things to have permalinks, then you are fine.”

Well I guess Avi can’t say that anymore (that he hasn’t had one single person complain about DabbleDB’s URLs.)

That said, why would Avi believe URLs to be unimportant anyway?  There is significant evidence all over the web that URLs are important, not the least of which has been documented on this blog in the past. As best I can tell Avi’s regrettable belief occurs because of his desire to be unburdened from dealing with web architecture so that he can hoist highly stateful web apps onto an unknowing and unsuspecting public, simply because that’s what Avi values. Basically Avi chooses to ignore the importance of URL design for both users and good web architecture and to have his framework emit simply awful URLs, merely because doing so makes coding and using his server-side framework so much easier. That’s similar to someone not addressing the unfortunate necessity of security simply because dealing with security is a PITA. (BTW, Amazon’s URLs are some of the worst, and they only get away with it because of their early momentum. They are NOT a good example to emulate.)

So you see, DabbleDB exhibits some very clear examples of really bad URLs. To see for myself I created a free trial account over at DabbleDB, which gave me my own well-designed URL (itself, not bad):

http://welldesignedurls.dabbledb.com/

Next I created an application called “Sites” and a first “category” that I named “Domains” (evidently in DabbleDB parlance a “category” is like a table to us relational database types.)  This gave me the following URL:

http://welldesignedurls.dabbledb.com/dabble/sites?view=2&_k=ZEiTkHyn

Not bad, but the “/dabble/” is unnecessary, the “view” could have been defaulted, and the “_k” is, well, so gratuitous I doubt I need criticize it further.  Clearly what I would have preferred to see is this:

http://welldesignedurls.dabbledb.com/sites/

Or at least:

http://welldesignedurls.dabbledb.com/apps/sites/

And I believe anyone would be hard pressed to explain why the actual URL DabbleDB uses is better or why the URL I proposed would not be workable. Still, all is not so bad to this point because it appears DabbleDB will respond appropriately to:

http://welldesignedurls.dabbledb.com/dabble/sites

Of course anyone bookmarking the URL vs. composing the URL for a blog will be linking to two different URLs as far as web architecture is concerned, which has its own perils for the owner of the website. (I’m of course assuming public URLs for this use-case, which is possible via DabbleDB Commons, itself having a great URL of http://dabbledb.com/commons, but many usability problems still exist in the closed environments where most DabbleDB databases will be used.)

But matters get much worse when we drill down into the “Domains” category I created. Compare the following two URLs and guess which one I envisioned vs. the one Dabble generated:

http://welldesignedurls.dabbledb.com/dabble/sites?view=2&_k=qPDotnwm

http://welldesignedurls.dabbledb.com/sites/domains/

And if we click on the name of the domain “welldesignedurls.org”, it gets even worse:

http://welldesignedurls.dabbledb.com/dabble/sites?entry=7&view=2&_k=jGMmkZyZ#objectEditor

Again, I would have liked to have seen:

http://welldesignedurls.dabbledb.com/sites/domains/welldesignedurls.org/
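
To show the proposed structure is entirely workable, here is a minimal sketch of a router that could serve such hierarchical URLs. (This is just an illustration in Python using only the standard library; the handler names are hypothetical, and it is obviously not DabbleDB’s actual Smalltalk stack.)

from urllib.parse import urlparse

# Hypothetical handlers; a real app would render pages here.
def list_applications():         return 'list of applications'
def list_categories(app):        return 'categories in ' + app
def show_category(app, cat):     return 'entries in %s/%s' % (app, cat)
def show_entry(app, cat, entry): return 'editor for %s in %s/%s' % (entry, app, cat)

def route(url):
    """Dispatch /app/category/entry/ paths to the handlers above."""
    segments = [s for s in urlparse(url).path.split('/') if s]
    handlers = {0: list_applications, 1: list_categories,
                2: show_category, 3: show_entry}
    return handlers[len(segments)](*segments)

# The URLs proposed above map cleanly:
print(route('http://welldesignedurls.dabbledb.com/sites/'))
print(route('http://welldesignedurls.dabbledb.com/sites/domains/'))
print(route('http://welldesignedurls.dabbledb.com/sites/domains/welldesignedurls.org/'))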

Why is this important?  Because humans rely on the meaning of a URL in a significant number of contexts, often contexts where only recognition (vs. URL construction) is important: in email, in the browser history list, in older bookmarks, on printed communication, and more.  By analogy, imagine how much harder computers would be to use if users had no choice but to always navigate the tree structure of a deeply nested directory instead of simply copying and pasting the path from, for example, Windows’ Explorer or the Mac’s Finder to an Open File dialog[2]. Just imagine what it would be like if the path to the user’s directory was named “C:\%GSkstyrWshs\@9KBHasklp\”. Ye-gads!

There are still further cases where clean URL design is important. For bloggers composing their links, having the ability to learn a link structure rather than having to navigate to each page they want to link (such as on Wikipedia) is invaluable. For marketers wanting to convey a location in advertising for their customers and prospects to visit. And especially for users of web apps that are heavily data-oriented, where users are involved in editing, navigating to, and communicating various application states (a.k.a. web pages) to their colleagues, such as an app like DabbleDB. If ever there was a category of web apps where good clean URL design is critical, it would be online databases!

So NO, Avi, URL design IS important. I hope you can learn this and make changes to DabbleDB and Seaside before it’s too late for you and, worse, for your users.

Footnotes

  • 1.) How ironic Avi would name his blog “HREF Considered Harmful” as HREFs are truly one of the core foundations of web architecture.
  • 2.) Yes I know that some people don’t ever copy and paste paths but many of the more intelligent and/or aware users do.

17 Responses to Seeing things the way in which one wants them to be (not the way they are)

  1. Ramon Leon says:

    You seem to be under the assumption that every web site wants to be indexed and have hackable urls. This isn’t the case. Avi isn’t against clean URLs, he’s against the blind use of needlessly clean urls in contexts that don’t need them. In the case of Seaside, that’s complex workflow based web applications. Seaside can and does do clean URLs, when you want them, it simply doesn’t require them.

    If you’re running an ecommerce site, you might want your products catalog behind a clean RESTful URL structure, but you might not want or need your checkout process to be linkable.

    Avi knows your position, but from reading your article, it doesn’t seem you know his. You also make the assumption that the web is just for websites. The truth is the web has become a platform, and the technology used on every size network, public and private.

    Desktop applications are moving into that platform, and Seaside is one of those frameworks. It’s not trying to build web sites, it’s meant to build desktop style apps with desktop style programming. The web architecture is not always desired or appropriate, and often gets in the way of solving the problem at hand. Avi’s position seems perfectly pragmatic.

  2. Ramon,

    Thanks for the comment.

    No, I’m not under the assumption that every website wants to have indexed and hackable URLs. Frankly, I don’t care as much about what the website owner wants; I care about what the user wants, and the user wants hackable URLs and just as importantly URLs that can be composed by reasonable logic, especially for apps that the user uses frequently such as DabbleDB.

    If Seaside can do clean URLs, that is awesome, but that’s not how Avi presented it when he ranted against clean URLs. Consequently, I ranted back. And I wouldn’t agree that Seaside is only for complex workflow; from what I can see it is being presented as a general-purpose tool just like Ruby on Rails.

    And yes, I agree with you in part about the catalog vs. the checkout process, but not entirely regarding the checkout process. For example, the Add-to-Cart link should be a clean URL (as a side note, if you had a link like http://example.com/cart/add/12345/ it should respond with a page that shows item #12345 and a link that asks the user to confirm adding it to the cart in order to be RESTful, as a GET request shouldn’t place an item in a cart; see the sketch below.)
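
    To make that concrete, here is a rough sketch of the shape I mean, written as a Flask-style Python handler (purely illustrative: the route, the cart list, and the redirect target are all made-up stand-ins, not DabbleDB or Seaside code):

    from flask import Flask, request, redirect

    app = Flask(__name__)
    cart = []  # stand-in for real per-user cart storage

    @app.route('/cart/add/<int:item_id>/', methods=['GET', 'POST'])
    def add_to_cart(item_id):
        if request.method == 'GET':
            # A GET must be safe: show the item and ask the user to confirm.
            return ("<p>Item #%d</p>" % item_id +
                    "<form method='post'><button>Add to cart</button></form>")
        cart.append(item_id)  # only the POST actually changes state
        return redirect('/cart/')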

    No, I haven’t read everything Avi’s ever written or heard everything Avi’s ever said, but I did look for a moderate view from him regarding URLs and wasn’t able to find such in the 30 minutes I spent looking. What I know about Avi’s position is what I read on Mike Pence’s blog and what I was easily able to find on his blog and elsewhere on the web. Nothing I read from him was moderate on the subject.

    Please understand that Avi is not the only person spreading “clean URLs are not important” dogma, and I wrote this as a polemic against that belief. Avi was just conveniently the most recently visible person to make such a strong statement, and most discussion I’ve seen on the issue has been in mailing lists and not on blogs, so this was a great opportunity to refute it.

    I do NOT make the assumption that the web is just for websites; you are the one making the incorrect assumption. I believe clean URL structures are as important for REST-based services as they are for websites. Clean URLs are an indication of a well designed architecture and are especially important when using the web as a platform.

    Lastly, I see the potential evolution towards “desktop applications” on the web given the vision you mention as a really bad thing. The web works well because of the set of principles documented in the W3C’s Web Architecture (linked above) and web apps that subvert that architecture do so not only at their own peril but more importantly the peril of the web in general.

    To paraphrase George Santayana: “Those who ignore the importance of HREF and the URL are doomed to foster chaos and undermine the web.”

    The URL, and the content it makes accessible, adds orders of magnitude more value to the web, and to the owner of the site, than the insular apps which you propose, because of network effects. But most people like whiz-bang and glitz, and most don’t appreciate network effects, so they go for what’s pretty but not what gives them the most value. BTW, that is one of the reasons I so dislike ASP.NET.

    Please realize I am not saying that dynamic apps in and of themselves are bad for the web, only that they need to be carefully designed so as to respect the architecture of the web. And reading Avi’s rant, it didn’t seem to me that he respects web architecture very much.

    That said, I really do hope I did misunderstand and that I learn Avi actually does respect the core architectural principles of the web. If not, everyone he influences will be worse off because of it.

  3. Pingback: Avi Bryant's selective vision? « Mike Does Tech

  4. Ramon Leon says:

    I understand your position, but all of your arguments are based on the idea that REST is the only correct architecture, and that simply isn’t true. REST is an architecture, one among many, and each has advantages and disadvantages.

    Every URL simply does not need to be restful.

    “I care about what the user wants, and the user wants hackable URLs and just as importantly URLs that can be composed by reasonable logic”

    The user isn’t always the primary concern, sometimes security is a concern and you explicitly don’t want hackable URLs. Sometimes content scraping is the overriding concern, and you explicitly don’t want hackable URLs.

    “Clean URLs are an indication of a well designed architecture and are especially important when using the web as a platform”

    Clean URLs are not the only indication of a well designed architecture, nor are they strictly necessary. They are nice, not necessary.

    “Lastly, I see the potential evolution towards “desktop applications” on the web given the vision you mention as a really bad thing.”

    Well, here’s where we disagree: most of us programmers see it as a really good thing, because desktop applications are in general far superior to and far more complex than web applications, and the RESTful web architecture does not lend itself to building highly dynamic component-based applications that don’t bow to the almighty page.

    Not everything is a page, not everything needs to be linkable. Seaside is a component based architecture that makes building reusable and highly dynamic workflow based applications almost trivial, and it does so by downplaying the importance of both the page and the URL. This isn’t a mistake, it’s a conscious design trade-off that has enormous advantages for a certain class of applications.

    It’s not that clean URLs aren’t important, it’s that often, other factors are simply far more important, like time to market, lessening the lines of code necessary to do something, security, etc. The REST architecture simply isn’t always the answer.

  5. Ramon,

    Thanks again for the reply. Before replying again, see the last paragraph.

    Again, an incorrect assumption on your part. My arguments are not all based on REST being the only correct architecture (although I’m not necessarily saying that REST isn’t the only correct architecture; that is a discussion for another time and another blog.)

    No, my argument regarding URLs is that many people form an opinion that URL design is unimportant and then use that opinion to rationalize the use of poorly designed URLs. I believe that one should have a very good reason why URLs do NOT conform to good URL design as opposed to the reverse (which appears to be your and Avi’s position, tell me if I am incorrect.)

    As an aside because I am curious, you speak of REST having disadvantages but you don’t detail them. Can you please explain? Your assertion that “every URL simply does not need to be restful” is only supported by your opinion; please give me valid reasoning why.

    You state that security is a reason for non-hackable URLs. Forgive me for being so blunt, but if you choose security-by-obscurity you are a fool. And if you are trying to avoid content scraping, you are best to not publish the content online as any content published online worth scraping can and will be scraped. People who choose these strategies are dinosaurs that don’t understand the technical architecture of the web.

    I agree that clean URLs are not the only indication of a well-designed architecture, but just because a car needs more than an engine doesn’t mean that the engine is superfluous. You state that URLs “are nice but not necessary”; I state the opposite, and I support my claims with the need for human usability and the value of solid architectural design. How do you support your claim? You did not give any reasons besides your opinion.

    You state that “most of us programmers see it as a really good thing, because desktop applications are in general far superior and far more complex than web applications.” In this you are doing me a huge favor. You are supporting the thesis of the title of this post better than I could support it myself! The ability to restore state based on a well-designed URL is intensely valuable, yet programmers who are more interested in developing with their pet architectures than making applications effective on the web are willingly blinded by their ideology.

    You state that “the RESTful web architecture does not lend itself to building highly dynamic component-based applications” but you give no examples to explain your pontification. You added “that don’t bow to the almighty page,” which I don’t exactly follow.

    As an aside, I think one of the best things we could do for software on the desktop is to import the REST architecture. REST is about constraints, and REST has proven its ability to scale because it embraces those constraints. If desktop software were to embrace constraints in a similar manner we’d see highly scalable systems that are not brittle to change. I have for years yearned for a DRL (desktop resource locator) in my desktop apps to greatly improve my productivity, but alas all but Windows Explorer ignore this need.

    You say that “Not everything is a page, not everything needs to be linkable.” Please give me concrete examples. And while you’re at it, explain to me how caching works in your examples for things that are not pages and that don’t need to be linked.

    When you say “It’s not that clean URLs aren’t important, it’s that often, other factors are simply far more important” I will repeat “Those who ignore the importance of HREF and the URL are doomed to foster chaos and undermine the web.”

    You give “time to market” as justification. I don’t see how that’s relevant. You choose to use a framework that doesn’t support good URL design, and I’ll use one that does. Not a valid issue.

    You give “lessening the lines of code necessary to do something” as justification. I say “As many lines of code as necessary, and no more.” IOW, I can always reduce code by eliminating necessary functionality, but that’s a fool’s bargain.

    You say “security” is a justification. I’ve already debunked that above.

    You say “The REST architecture simply isn’t always the answer”, but again, you don’t give concrete examples why not.

    Rather than go round and round in circles, why don’t you present one or more examples that you think strongly argue for ignoring solid URL design. If I can’t debunk them, you’ll score points. If I can debunk them, you can keep digging up more examples, but until you can find an example I can’t debunk, you’ll acknowledge that I’m ahead on the debate.

    P.S. I’d love to see you make these same claims on the [rest-discuss] mailing list, where people far more capable than I would be able to explain the fallacy of your reasoning.

  6. Ramon Leon says:

    “I believe that one should have a very good reason why URLs do NOT conform to good URL design as opposed to the reverse (which appears to be your and Avi’s position, tell me if I am incorrect.)”

    You’re incorrect, you assume we don’t have those good reasons, but we do.

    “The ability to restore state based on a well-designed URL is intensely valuable”

    URLs alone are simply incapable of representing the necessary state to do some of the things Seaside does directly. The reason for this is simple, not all state is serializable. Specifically, transient state in continuations that capture state directly from the environment they were invoked in.

    If I have an anchor on a page that when pressed, refreshes the contents of a div within that same page, I could try and come up with some meaningless URL logic for it, but why should I? What I really want is to attach code directly to a user action, not make up some URL that when parsed correctly, still can’t store all the necessary state of the sequence of actions that decide what actually shows in that div. A wizard for example, or a color picker that when selected, returns the color to the previous component that was displayed in that div.

    “Those who ignore the importance of HREF and the URL are doomed to foster chaos and undermine the web.”

    Those that pretend that state doesn’t exist, are doomed to be forced to manually encode and decode it from URLs. What you see as undermining the web, we see as improving it, taking it to a new place and making it even more useful than it already is by introducing a simpler architecture that makes writing complex applications vastly simpler.

    REST is useful for page level resources that “can” be cached and are valid entry points into an application. Not all URLs represent long lived resources, some represent transient resources like [this block of code] that have no meaningful need to be represented restfully.

    “You say “security” is a justification. I’ve already debunked that above.”

    No you didn’t, you simply assumed I meant security through obscurity, but what I actually meant was more like security through capabilities, a proven better model. I know you accessed something securely because the only way you can get there, is through a random URL I gave you that’s bound to your current session. If URLs aren’t hackable, and resources are bound to random URLs rather than predictable ones, I no longer have to “secure” that resource with extra code that ensures the caller is authorized. Unlike REST, this can be automated.
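
    If it helps, here is a tiny sketch of that idea in Python rather than Smalltalk (the names are made up and this is not Seaside’s actual code). The point is that possessing a live, unguessable token IS the authorization, so the handler needs no separate ACL check:

    import secrets

    capabilities = {}  # token -> the resource this URL grants access to

    def grant(resource):
        """Mint a capability URL bound to one resource (e.g. per session)."""
        token = secrets.token_urlsafe(16)  # unguessable, not merely obscure
        capabilities[token] = resource
        return '/r/' + token               # hand only to the authorized user

    def handle(path):
        """No ACL lookup: holding a live token is itself the authorization."""
        resource = capabilities.get(path.rsplit('/', 1)[-1])
        return '200 ' + resource if resource else '404'

    url = grant('invoice-42')
    print(handle(url))          # 200 invoice-42
    print(handle('/r/guess'))   # 404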

    You don’t see how many of the things I mention are relevant, because you seem to have no experience with a continuation based framework like Seaside. I’ve used both restful frameworks like Rails, and continuation frameworks like Seaside, and let me tell you, Seaside is vastly more productive and enables a whole new level of applications that would simply be too complex and involve too much code in a RESTful framework. Time to market is absolutely a valid concern, but you don’t seem to have the experience with a framework that’s shown this to you, so I can’t convince you.

    So I’ll restate my original assertion, you’re arguing against something you don’t have experience with and don’t understand. You clearly understand REST and its benefits, but you don’t, won’t, or can’t seem to see that there are trade offs involved in choosing that approach that make application development vastly more complex than it should be. Meaningless URLs are useful because they allow automation in a way that RESTful URLs CAN’T. Automation allows the programmer to concentrate on solving the real problem at hand rather than spending all his time MANUALLY marshaling state around in URLs.

    I looked around rest-discuss a bit, the group seems barely aware of things like Seaside and continuation based servers. There are only a few posts discussing the issues. I don’t need to make an argument against rest, Seaside already does that quite well.

    Rather than go around in circles discussing this (though I have enjoyed the discussion), might I suggest you actually try out Seaside, and come to your own conclusions. There are valid arguments for not being restful; you’re not being very pragmatic if you can’t at least admit that other positions are valid.

    Seaside is possibly the best web framework ever written, and almost everyone who has actually programmed something non trivial in it agrees. Even DHH, Mr restful Rails himself, acknowledges that Seaside is probably the most interesting framework out there and worth learning something from. Rather than arguing against Avi’s views, you might be better off taking a closer look at what this guy wrote that has so many people fawning over his framework.

    You’re misinterpreting his position as just another misinformed soul who doesn’t “get it”, but that’s not the case. He’s not “seeing things the way in which one wants them to be (not the way they are)”, he’s making them the way he wants them to be, he’s inventing the future, rather than being stuck in the present. The way things are, isn’t always the way things should be.

  7. Ramon:

    Thanks again for your continued replies. :)

    Sigh. You are only rehashing your opinions. Give me examples. Actually tangible examples in code with explanations.

    ““I believe that one should have a very good reason why URLs do NOT conform to good URL design as opposed to the reverse (which appears to be your and Avi’s position, tell me if I am incorrect.)” You’re incorrect, you assume we don’t have those good reasons, but we do.”

    Are you saying you DO assume good URL design UNLESS you have a good reason for NOT? If so, why are clean URLs not the default in Seaside? If not, then I am correct.

    ““The ability to restore state based on a well-designed URL is intensely valuable” URLs alone are simply incapable of representing the necessary state to do some of the things Seaside does directly. The reason for this is simple, not all state is serializable. Specifically, transient state in continuations that capture state directly from the environment they were invoked in.”

    Examples?

    “If I have an anchor on a page that when pressed, refreshes the contents of a div within that same page, I could try and come up with some meaningless URL logic for it, but why should I? What I really want is to attach code directly to a user action, not make up some URL that when parsed correctly, still can’t store all the necessary state of the sequence of actions that decide what actually shows in that div. A wizard for example, or a color picker that when selected, returns the color to the previous component that was displayed in that div.”

    This example, which is still somewhat abstract, is not about URL design. In cases where you don’t need URLs, URL design is unimportant. But in cases where you do need URLs, design your URLs; don’t just generate crap for use as URL parameters.

    That said, I would argue that there are many times when a developer thinks that a client-based state change does not need a URL when a URL would actually be highly valuable to users and/or automation clients. But please give me concrete examples so we don’t continue to talk past each other, ideally with supporting code. I don’t know Smalltalk (though I studied it in 1987-1993) but I’m sure I can read it.

    ““Those who ignore the importance of HREF and the URL are doomed to foster chaos and undermine the web.” Those that pretend that state doesn’t exist, are doomed to be forced to manually encode and decode it from URLs.”

    You assume I am saying that all state should be in URLs. That is not what I am saying; REST says to pass a representation as a state transition. I don’t see an example from you that shows where REST’s state transitions do not make sense.

    “What you see as undermining the web, we see as improving it, taking it to a new place and making it even more useful than it already is by introducing a simpler architecture that makes writing complex applications vastly simpler.”

    I know that’s what you see. What I see is lots of unintended consequences from your idealism. I see your efforts as potentially disruptive as Jerry Falwell was in US political discourse for the past 30 years.

    “REST is useful for page level resources that “can” be cached and are valid entry points into an application. Not all URLs represent long lived resources, some represent transient resources like [this block of code] that have no meaningful need to be represented restfully.”

    Again, you give abstract examples. Let’s talk concrete examples.

    ““You say “security” is a justification. I’ve already debunked that above.” No you didn’t, you simply assumed I meant security through obscurity, but what I actually meant was more like security through capabilities, a proven better model. I know you accessed something securely because the only way you can get there, is through a random URL I gave you that’s bound to your current session.”

    That *is* security through obscurity.

    “If URLs aren’t hackable, and resources are bound to random URLs rather than predictable ones, I no longer have to “secure” that resource with extra code that ensures the caller is authorized. Unlike REST, this can be automated.”

    AGAIN, please give me a concrete example.

    “I’ve used both restful frameworks like Rails, and continuation frameworks like Seaside, and let me tell you, Seaside is vastly more productive and enables a whole new level of applications that would simply be too complex and involve too much code in a RESTful framework.”

    Are you sure it isn’t simply allowing the developer to more quickly produce the wrong thing?

    “Time to market is absolutely a valid concern”

    I didn’t say it wasn’t. I said your abstract examples did not support time to market as a justification.

    “So I’ll restate my original assertion, you’re arguing against something you don’t have experience with and don’t understand. You clearly understand REST and its benefits, but you don’t, won’t, or can’t seem to see that there are trade offs involved in choosing that approach that make application development vastly more complex than it should be.”

    Try me, as I’ve been asking repeatedly; INSTRUCT ME! GIVE ME CONCRETE EXAMPLES!

    “Meaningless URLs are useful because they allow automation in a way that RESTful URLs CAN’T. Automation allows the programmer to concentrate on solving the real problem at hand rather than spending all his time MANUALLY marshaling state around in URLs.”

    Until you give examples that prove otherwise, I believe that is simply you believing what you want to believe. BUT I AM OPEN TO REVIEWING CONCRETE EXAMPLES.

    “I looked around rest-discuss a bit, the group seems barely aware of things like Seaside and continuation based servers. There are only a few posts discussing the issues. I don’t need to make an argument against rest, Seaside already does that quite well.”

    So I guess I need to ask them about their opinion of Seaside?

    “Rather than go around in circles discussing this (though I have enjoyed the discussion), might I suggest you actually try out Seaside, and come to your own conclusions. There are valid arguments for not being restful, you’re not being very pragmatic if you can’t at least admit that other positions are valid.”

    Rather than burden me to learn a new language and a new environment, why don’t you just give me some concrete examples to support your claims? That will be far less burdensome on you than your proposal will be burdensome on me. If I had all the free time in the world, it might be different, but I don’t.

    “Seaside is possibly the best web framework ever written,..”

    Well, that’s not a superlative. ‘-)

    “…and almost everyone who has actually programmed something non trivial in it agrees.”

    It’s hard to verify the statistical validity of your assertion. The ones who Seaside appeals to (potentially ones for which the title of this post applies) agree, I’m sure. But there are also people who think Rails is possibly the best web framework ever written, and almost everyone who has actually programmed something non-trivial on Rails agrees. So who does that make right? It’s a religious war. (Note, I am *NOT* a Rails fanboi, just mentioning an alternate platform already given.)

    “Even DHH, Mr restful Rails himself, acknowledges that Seaside is probably the most interesting framework out there and worth learning something from.”

    I’m also not a fan of DHH.

    “Rather than arguing against Avi’s views, you might be better off taking a closer look at what this guy wrote that has so many people fawning over his framework.”

    I’ve looked for good articles from him but seriously have not found any. Care to provide a few links?

    “He’s not “seeing things the way in which one wants them to be (not the way they are)”, he’s making them the way he wants them to be, he’s inventing the future, rather than being stuck in the present.”

    And I would say those are one and the same.

    In closing, there may be validity to some of Seaside’s approach, I don’t yet know. But what I do know is that violating REST indiscriminately, presuming URLs are unimportant, and generating state inside a client that is otherwise invisible to automation tools is usually a really bad thing to do. And if that’s Seaside’s default approach, it does so at the peril of its users and the web at large.

    SO GET ME SOME CONCRETE EXAMPLES, WILL YA? ‘-)

  8. Ramon Leon says:

    OK, first of all, that’s not security through obscurity. “A forgeable reference (for example, a path name or url) identifies an object, but does not specify which access rights are appropriate for that object and the user program which holds that reference. Consequently, any attempt to access the referenced object must be validated by the operating system, typically via the use of an access control list (ACL). In contrast, in a pure capability-based system, the mere fact that a user program possesses that capability entitles it to use the referenced object in accordance with the rights that are specified by that capability.” You need to brush up on capability security, because a random url tied to a particular user session, is a capability, not an obscure url that can be guessed. On this issue, you’re just wrong.

    Now, as for examples, I thought I provided them, I didn’t know you’d need actual code when the logical explanation is so obvious. However, consider the following method.

    go
    | age |
    age := 1. "start the guess at 1"
    [self confirm: 'Are you ', age asString]
    whileFalse: [age := age + 1].
    self inform: 'You are ', age asString.

    This would show the user a page guessing his age and give him a yes and no button to press, if he presses no, the answer is posted back, and the loop continues adding a year to the guess until he presses yes. After pressing yes, he’d be shown a screen that says “You are x” with an OK button which would end the interaction. Now, this is a very simple example with only a single variable captured. Please try and see past the simplicity and see that it could get much much more complex.

    The thing to notice is that the code is not broken up into the several pages that it actually renders as, nor is the logic scattered about. Whenever confirm is called, a yes or no dialog is created, code execution is paused exactly at this point in the method, and a continuation is created and assigned a random URL that is then used in the rendered form’s target. When the user submits the form, the continuation is invoked and the execution of the method starts right where it left off, except now it has the return value from the user, which could be any object but is a true or false in this particular case.

    The state involved in this interaction can’t be encoded in the URL, but it can be referenced as a resource in the current session. It has to be referenced as a resource because we’ve created a new continuation which has a unique identity of its own.
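
    Since you said you don’t know Smalltalk, here is a very rough Python analogy of that flow, a sketch of the idea only (made-up names, not Seaside’s implementation), using a generator to pause “mid-method” between page renders:

    import secrets

    sessions = {}  # continuation key -> paused generator

    def guess_age():
        """One method spanning several request/response cycles."""
        age = 1
        # Each yield renders a page and pauses until the user answers.
        while not (yield "Are you %d? [yes/no]" % age):
            age += 1
        yield "You are %d. [OK]" % age

    def start():
        gen = guess_age()
        key = secrets.token_urlsafe(8)  # the random "_k"-style URL token
        sessions[key] = gen
        return key, next(gen)           # render the first page

    def answer(key, said_yes):
        """Invoked when the form posts back to the continuation's URL."""
        return sessions[key].send(said_yes)

    key, page = start()
    print(page)                # Are you 1? [yes/no]
    print(answer(key, False))  # Are you 2? [yes/no]
    print(answer(key, True))   # You are 2. [OK]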

    This example creates a url on the form, but the same thing happens all over the code with anchor tags as well every time a tag is bound to a closure.

    html anchor callback: [self save: someCapturedObject]; text: 'Save'.

    What’s to see here is that someCapturedObject is captured by the closure, which allows Seaside anchors to call controller methods with actual objects as parameters rather than encoding their ids into a query string to be parsed out later.

    Components can even call other components and pass actual objects as parameters

    html anchor
        callback: [self call: (Editor on: someCapturedObject)];
        text: 'Edit'.

    And those components can return objects as results without knowing who the caller was. This continuation approach allows a whole new style of programming that wasn’t previously possible on the web and enables far more complex applications to be written than could be written otherwise.

    This all works because continuations are linkable resources, but only during the context of their lifetimes which is the lifetime of the session. For such short lived resources, going through the extra effort to try and make the generated URLs pretty just isn’t worth it. They are transient state and don’t need to be linkable.

    We’re no longer linking back and forth between pages, we’re linking back and forth between individual lines of code in an object graph. We might go through the effort of allowing RESTful urls to link to individual objects, but there’s just no way in hell it’s worth the effort to create RESTful URLs for every possible transient state of that object.

    In Seaside, URLs invoke closures and continuations in the current session via the _s and _k parameters in the query string; the rest of the URL is completely ignored. We can make it look like anything we like, and parse it out to mean anything we like. If you like, you can have every component add itself to the URL.

    You could easily parse WordPress URLs for instance and look up the necessary state from the title of the post. Seaside doesn’t take any abilities away from you, it simply gives you more options than you’d have in another framework.
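
    For instance, a minimal sketch of that kind of lookup (hypothetical Python, not Seaside code):

    posts = {'my-first-post': 'the stored state for post #7'}  # slug -> state

    def lookup(url_path):
        """Recover state from a readable, WordPress-style path."""
        slug = [s for s in url_path.split('/') if s][-1]
        return posts.get(slug, 'not found')

    print(lookup('/2007/05/my-first-post/'))  # the stored state for post #7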

  9. Jon Hanna says:

    “I understand your position, but all of your arguments are based on the idea that REST is the only correct architecture, and that simply isn’t true. REST is an architecture, one among many, and each has advantages and disadvantages.”

    Yes, but we’re already talking about the web here and REST is the architecture of the web. Even if you think REST is a complete disaster I’m afraid you’re stuck with it until there really is a “Web 2.0” rather than that being a rather silly name for people going “oh, the web actually does work after all, whodathunkit?” (don’t hold your breath). If you’re on the web you’re using REST, the question is whether you’re doing it well, buggily or just suboptimally.

    I agree with you though that clean, readable, hackable, and otherwise not-completely opaque URIs are not a necessity. I agree also with Mike that “clean URL structures are as important for REST-based services as they are for websites” – exactly as important I would say; not a necessity but pretty darn useful a lot of the time.

    Indeed it’s a principle of REST that for the most part URIs are opaque (not just can be opaque, but *are* opaque as there’s always some process that hasn’t a clue what your URI design means) and that clients expecting a particular non-opaque structure are more brittle than those that don’t – however that doesn’t make the real advantages of URI design disappear.

    If nothing else, readable, hackable URIs are another case of the principle of making code meaningful to humans that we learn when we’re children. Just as “function checkwheels(){foreach(wheel in car.wheels)checkwheel(wheel)}” is better code than “function a(){foreach(b in c.d)e(b)}”, though it’s much the same to a compiler, so too http://welldesignedurls.dabbledb.com/sites/domains/welldesignedurls.org/ is a better URI than http://likjewr.dabbledb.com/sdfaoiwe/dsflisdfoiuw/sfdl?hoiasfe=1sef though it could be just the same to a web server.

    I’d rather developers didn’t tie themselves in knots about URI design as some seem to do, but that’s no reason to neglect the very real advantages they do have.

  10. Ramon Leon says:

    There’s also no reason to be limited by the insanity of readable URLs when there isn’t a clear way to create them for a continuation based approach. Just because REST is the architecture of the web as a whole, doesn’t in any way imply that the same architecture is necessarily correct for a single application. Applications and web sites are vastly different things. REST folks never seem to understand the distinction.

    How should I put this simply? The best I’ve heard it put is this: there are enterprise applications, and applications that run the enterprise. Google Search is an enterprise-class application; Excel is an application that runs the enterprise. Their needs and scalability requirements are totally different.

    Dabble is a replacement for Excel; its scalability requirements are and will always remain low, and it’s intended to replace a desktop application. REST works best for enterprise applications while continuations and opaque URLs work best for applications that run the enterprise (desktop apps).

    There is a huge class of applications to be written that will never have more than a few to maybe a few hundred users EVER. Trying to put REST into a business application, meant to be private, not crawlable, not indexable by search engines, built “on” the web rather than “of” the web, is an exercise in idiocy. Many of these applications will be deployed on private internal networks, not the internet; the web is not the only HTTP server in town. REST has its place, but this ain’t it. Continuation based servers that make programming complex business applications vastly easier will absolutely dominate this space. In this context, pretty URLs just aren’t important at all.

  11. Jon Hanna says:

    “Just because REST is the architecture of the web as a whole, doesn’t in any way imply that the same architecture is necessarily correct for a single application. Applications and web sites are vastly different things. REST folks never seem to understand the distinction.”

    Hmm. The last thing I wrote was a piece of code that took a string from a database, did certain transformations on it, and put it back. Despite being a “REST folk” I understood that it didn’t need to be RESTful – it didn’t even have a URI!

    Now I’m back to another piece of code that that code is to be used with, which is a *web* application. It sure as hell does have to be RESTful, it’s on the web. I don’t need to deal with all of REST with it – there’s no need for me to deploy a series of one or more servers talking to one or more clients with zero or more intermediaries, since that bit’s already done, but I do need to know that’s the system I’m dealing with.

    This also doesn’t matter if it’s THE web or A web. HTTP is designed to work along RESTful principles, just like Java is designed to work with OO principles, calculators are designed to work with arithmetic principles and so on.

    I’m not entirely clear what your position is here though. First you say that since URIs are opaque (a basic REST principle) they don’t have to have any “readability”. This I agree with, but when Mike argues that readable URIs can be useful with RESTful applications, I also agree with that (though from previous discussions with Mike I don’t think I’d go as far as he does). You disagree (the attitude most commonly associated by a certain type of hard-core RESTafarian – “They’re opaque, so by gods are they going to look opaque!”), but you are also arguing that you don’t have to be RESTful when dealing with HTTP (which sounds to me much like saying you don’t have to be wet when you’re a fish).

    Reading this I see the two positions as:

    Mike: Although REST states that URIs are opaque, readable URIs are helpful in other contexts, and there’s nothing in REST that says we can’t ALSO make them readable and gain further advantages in our URIs beyond those we get from REST, despite how some read the opacity principle.

    Ramon: I want to take the one bit of REST that people read too much into with the result of URIs that have no advantages beyond being URIs, but not the rest.

    Jon: [At this point] WTF? Why am I defending REST and readable URIs to the same person, shouldn’t I be doing one at a time?

  12. Ramon Leon says:

    Hi Jon, appreciate the discussion.

    “It sure as hell does have to be RESTful, it’s on the web.
    This also doesn’t matter if it’s THE web or A web. HTTP is designed to work along RESTful principles, just like Java is designed to work with OO principles, calculators are designed to work with arithmetic principles and so on.”

    You’re forgetting server side session state. HTTP does not have to be used RESTfully. HTTP may have been designed to be used RESTfully, but cookies and server side sessions gave us the ability to get around that restriction.

    Yes, you trade scalability when using such techniques, but many times that is a trade worth making because scalability requirements are often not that high when writing applications vs writing web sites.

    So my position is, REST is a tool, use it when it’s appropriate, but recognize that continuations are a tool too, and they are generally appropriate for a different class of problems than REST. Both are valid and necessary techniques, often within the same application.

    Point being, when using continuations bound to server side session state, you are no longer being RESTful, and that’s OK, everything doesn’t “have” to be restful. REST is best used on long term resources rather than short term session based resources.

    As for readable URLs (not a requirement of REST at all), they are certainly nice to use when you are being RESTful.

  13. Jon Hanna says:

    Session state is a bug IMO, however…

    Now, I also use session state, because I often use frameworks that use session state and *with those frameworks* session state is often an easy way to do things.

    With the human-readable web (i.e. what we see in browsers) there are also a few things (some even useful!) that can be done with sessions that couldn’t be done otherwise or in as user-friendly a manner until recently (they can all be done, and done better, with secondary XML requests or similar techniques on modern browsers).

    In a case where one is dealing with the machine-readable web I don’t see anything one can do with session state that one can’t do as well, if not better, without. Further, if one doesn’t have some handy tools built on top of session state (i.e., if you want a “Session” object you’re going to have to build it for yourself and handle all of the session management) I don’t see anywhere where it even has an advantage in terms of developer time over other techniques.

    As for readable URIs (where we got into this) I think they are certainly nice to use whether you are being RESTful or not. You never know when you might have to read the things. As long as they don’t break REST (and readable URIs can lead to some bad habits that do break REST) and as long as it doesn’t lead to an unjustifiable burden upon resources (whether the resources used by the running code, or by the development process) I’d go for them.

  14. Ramon Leon says:

    Session state isn’t a bug, it’s a technique. Try as you might, state exists and interactions with the user accumulate state.

    You can carry all the state on the client, which limits you to simple serializable state, in either hidden form fields or cookies or you can store the state on the server, either in persistent storage which limits you to simple serializable state, or in memory storage which allows much more complex state like continuations. There are valid reasons for every approach, and trade-offs made when choosing any of them.

    Ajax has certainly changed things a bit and made state less necessary, but it hasn’t replaced the need for it.

    As for the machine readable web, I agree, I don’t tend to use any state there, there’s little benefit to it.

    Using sessions is an engineering trade off, sometimes it’s worth it, sometimes it isn’t. Thinking either is always the right answer, or either is always the wrong answer is dogmatic and simply wrong.

    If you think state is worthless, you haven’t tried Seaside.

  15. Hi Mike,

    I haven’t followed all of your discussion with Ramon, but suffice it to say that there are always two sides to an argument. I like REST, and I also like Seaside. Design is always about compromise and Seaside chooses to compromise on URL prettiness, and as a consequence it gains in other areas like ease of implementation.

    I must say that I’m intrigued by your interest in URLs. The web has had a significant impact, and I agree that end-user accessibility has been a big part of that. The concept of URLs is intuitive, along with the web metaphor which most people can grasp relatively easily.

    The issue for me is whether this intuitive “web of resources” metaphor is part of the web’s past or whether it represents the web’s future. We are going through interesting times where the future of the web is up for grabs.

    Personally, I believe that the web has hit critical mass where any future initiative would do well to build on the web’s current strengths, and I agree with you that URLs are one of those strengths. Technology that tries to make the web into something that it isn’t doesn’t seem to be faring too well. This is why I like REST.

    Having said this, HTTP may prove to be the wrong approach in the long run. For example the immersive web, where we are invited to climb into a 3D virtual world just like the movie “The Matrix”, may be just around the corner. Second Life has shown a glimpse of what is possible here, and the Open Croquet project has shown that technically a 3D collaborative web is indeed feasible.

    The thing to bear in mind is that innovation can often mean going out on a limb and doing things differently. For an example of what I mean, take a look at the iPhone. By choosing not to confine itself to the rules of the past, Apple has managed to come up with something that is a real step forward. So we need to keep the ability to “think out of the box”, which is why I wouldn’t knock Seaside for taking a different slant.

    In all, I don’t know. Personally, I don’t believe web 2.0 is all that ambitious, but I do agree that there is probably a lot of mileage in HTTP/REST, mashups and pretty URLs. Would I love the web to re-invent itself in a truly innovative way along the lines of Open Croquet? Yes. In the Croquet world URLs are replaced with what the Croquet team call postcards, there are no central servers, and HTTP resource requests are replaced by peer-to-peer distributed object replication. So is the 3D collaborative web revolution likely to happen anytime soon? Probably not, for a myriad of reasons, none of which are technical :^).

    I plan to be working in Atlanta soon, and I would love to meet you in person. I find what you have to say really interesting. The fact that you are thinking about such things impresses me. Keep up the advocacy!

    Paul.

  16. Breton says:

    It seems I’ve missed the best part of this discussion by many months, but I’ve noticed a few glaring things that I’d like to pedantically nitpick.

    First, and most pedantic, Mike’s quote from Avi is not exactly a clear example of confirmation bias. It seems more like an “appeal to ignorance”, that is, “There’s no evidence for X, so it must not be true.” Confirmation bias is more like “There’s some evidence for X, so it must be true.”

    Second, REST is not about pretty URLs. REST and pretty URLs have very little to do with each other. And while you may think you’re being clever, and subverting REST with your Seaside framework, as long as you’re using HTTP, you’re using REST architecture whether you like it or not. The question here about Seaside’s state model, then, is not whether it’s RESTful, but whether or not it’s breaking the expectations of the client application (either a browser, or a search engine, or another client).

    That is, the worst crime of session state is that it breaks the back button, it breaks bookmarking, it breaks the user expectation of what happens when they send a friend or colleague a web address. Basically, session state (and also Ajax) are a really bad idea for the same reasons that frames were a really bad idea. It may seem in the heat of the moment a really convenient way to get things done, but the end result is a broken website.

    The worst part of it is that Users, and consequently, web developers, who are also users, don’t see anything wrong with breaking the back button, or breaking bookmarks.

    REST architecture, it’s true, is not necessarily the best architecture for an application. But it is a really good architecture for designing applications that must work over a network. The constraints of REST are not there simply to annoy desktop application developers; they’re there to force you to SIMPLIFY your application’s design in such a way that it is robust against network failures and slow connections, allows proxies and caching, and has other really good features that Ramon seems perfectly happy to throw away without a second thought just to make his website a little easier for him to program.

    So what we have here is a design decision that costs the users dearly by breaking their browser, harming usability, making the application perform very poorly over a network, for what? To make it a little easier for you to program? This doesn’t seem like a particularly professional attitude to me.

  17. A note on Amazon URLs. There’s a site affiliated with Amazon called books-by-isbn.com. You can visit books-by-isbn.com/ followed by an ISBN and see details on the book. The ISBN may be entered with or without hyphens separating the groups of digits (any hyphens are ignored, and their position need not conform to the logical structure of the ISBN), and may be entered in thirteen-digit or in ten-digit form. It’s quite neat.

    TRiG.
