Working With Internet Explo{d,r}er 9

Lately I’ve been working with Web Components in my spare time, and recently I was moved to the Engineering group at 3Pillar. At work I have been challenged by the limitations of Internet Explorer 9 (IE9). Many of our clients still support browsers like IE9, and much of the corporate and government world uses browsers that have been locked down to specific versions, whether because of Hacker News propaganda and paranoia about security and privacy issues in browsers or just because ‘Big Brother’ said so. In fact, the corporate world seems to prefer the ‘security’ and ‘privacy’ of Internet Explorer. Many features in browsers not named ‘Internet Explorer’ are disabled, and system administrators will sometimes lock down everything ‘just to be sure’. Internet Explorer itself gets locked down as well. Perhaps it is the ‘clear’ internet options that integrate so well into their ‘secure’ Windows environment.

Now that I am part of the Engineering group, I have client work with a real need to support older browsers, as opposed to the cutting-edge prototype work I was doing previously with the User Experience group. It has been a while since I had to break out my cross-browser bug-smashing mindset, and I was very excited that I did not have to worry about IE9’s cleverly named predecessor, Internet Explorer 8.

Some of the pain I encountered in IE8 was extreme and frustrating, to say the least. At first I compared it to IE7 and justified the abuse much like a battered victim would. After all, it supported CSS data URIs, I had no more float or double-margin issues, and hasLayout worked a bit better. I am sure there is more, but that is about the sum of it. IE8, like its father before it, had extreme issues with lazy loading and script performance. Internet Explorer 8 and older versions use JScript, a variant of JavaScript that is non-standard and proprietary to Microsoft. IE9 is the first of the IE browser family to use standard JavaScript.

Internet Explorer has a history of CSS issues solved with expressions and filter rules. These work but are highly toxic to performance; rendering times are significantly higher when you use them, so avoid them if you can. The most obnoxious part of IE8 (and IE7) with regard to CSS is its request and file size limitations.

Internet Explorer 7

  • Maximum stylesheet size limit: 288kb per file
  • Maximum number of CSS stylesheets: 30 files

Internet Explorer 8

  • Maximum stylesheet size limit: 288kb per file
  • Maximum number of CSS stylesheets: 30 files

Internet Explorer 9

  • Maximum stylesheet size limit: not tested yet
  • Maximum number of CSS stylesheets: 30 files

Internet Explorer 10

  • Maximum stylesheet size limit: not tested yet
  • Maximum number of CSS stylesheets: not tested yet

Other, less significant CSS limitations are the rule counts, import counts, and import nesting levels. While these numbers may seem unreachable, consider the misuse of CSS preprocessors on a team of 20-30 developers. It has happened before, and hopefully we are now writing CSS in a style like OOCSS or SMACSS to avoid it.

Internet Explorer 6-9

  • A sheet may contain up to 4095 rules
  • A sheet may @import up to 31 sheets
  • @import nesting supports up to 4 levels deep

Internet Explorer 10

  • A sheet may contain up to 65534 rules
  • A document may use up to 4095 stylesheets
  • @import nesting is limited to 4095 levels (due to the 4095 stylesheet limit)

The worst part about these limitations is that no errors or warnings are surfaced to the user. When Internet Explorer parses a CSS file and reaches its maximum size, it simply stops parsing. The request still returns a 200, but suddenly some CSS rules are missing. When minifying and concatenating files, as current best practice would warrant, this becomes a debugging nightmare unless you know these limitations exist.
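
Because the failure is silent, I have started checking for it at build time instead. Below is a minimal Node.js sketch of the idea; the file path, the regex-based counting, and the thresholds are my own rough assumptions rather than an official tool, so treat it as a starting point.

// check-css-limits.js - a rough build-time sanity check for IE's stylesheet limits
// (the path and thresholds are placeholders for your own build)
var fs = require('fs');

var cssPath = 'dist/styles.css'; // placeholder path
var css = fs.readFileSync(cssPath, 'utf8');

// strip comments so they don't skew the counts
var stripped = css.replace(/\/\*[\s\S]*?\*\//g, '');

// approximate the rule/selector count: every block opener contributes its
// comma-separated selectors (at-rules are counted too, so this overestimates)
var blocks = stripped.match(/[^{}]+\{/g) || [];
var selectorCount = 0;
for (var i = 0; i < blocks.length; i++) {
  selectorCount += blocks[i].replace('{', '').split(',').length;
}

var sizeKb = Buffer.byteLength(css, 'utf8') / 1024;
console.log(cssPath + ': ~' + selectorCount + ' selectors, ' + sizeKb.toFixed(1) + 'kb');

if (selectorCount > 4095) {
  console.warn('Over the IE6-9 rule limit; split this stylesheet.');
}
if (sizeKb > 288) {
  console.warn('Over the reported 288kb per-file limit; split this stylesheet.');
}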

IE9 has full support for SVG (Scalable Vector Graphics)! Prior to IE9, the browser used its own vector format, VML. Charting libraries had limited support for VML, so it is great that SVG is finally native in IE. IE9 also has full CSS support for media queries, but that only covers CSS rules; keep in mind that the JavaScript “window.matchMedia” method does not work. IE9 does not support “Element.classList”, which is frustrating if you don’t want to use jQuery. IE9 does have partial support for viewport units, which is helpful but not as flexible as we need. Pointer events are not supported either and require a polyfill. Combine that with buggy CSS appearance rules and styling form elements still sucks really bad. As far as layout goes, I have had no luck getting flexbox polyfills to work in IE9, so “display: table” or floats seem the only semi-sane way to go. IE8 and IE9 have issues with cross-origin resource sharing and CSP as well. The typical scenario is icon fonts from Google Fonts failing to load, but cross-domain issues are not limited to that particular case. IE8 and IE9 get buggy, partial support through ‘XDomainRequest’, and the majority of polyfills require access to the origin server…gee, that’s useful.
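
When a full polyfill feels like overkill, small guards like the sketch below get me by. This is purely illustrative (the 768px breakpoint and class names are made up), and a maintained matchMedia or classList polyfill is usually the better choice.

// minimal fallbacks for two of the IE9 gaps mentioned above
// (illustrative only; real projects should prefer maintained polyfills)

// matchMedia: fall back to a width check so code paths don't explode
function mediaMatches(query, fallbackMinWidth) {
  if (window.matchMedia) {
    return window.matchMedia(query).matches;
  }
  // crude fallback: compare viewport width against a number we pass in
  return document.documentElement.clientWidth >= fallbackMinWidth;
}

// classList: tiny add/remove helpers that work on className strings
function addClass(el, name) {
  if (el.classList) {
    el.classList.add(name);
  } else if ((' ' + el.className + ' ').indexOf(' ' + name + ' ') === -1) {
    el.className += ' ' + name;
  }
}

function removeClass(el, name) {
  if (el.classList) {
    el.classList.remove(name);
  } else {
    el.className = (' ' + el.className + ' ')
      .replace(' ' + name + ' ', ' ')
      .replace(/^\s+|\s+$/g, '');
  }
}

// usage
if (mediaMatches('(min-width: 768px)', 768)) {
  addClass(document.body, 'wide-layout');
}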

To sum up…IE9 IS the new IE8. Like IE8, it adds a huge amount of overhead when developing rich, performant applications. Consider burying it alive if you can. Here are some tests to visualize some of the CSS issues.

LifeCycle Callbacks in Custom Elements

I was going through the HTML5 Google community posts a few days ago and saw that Max Waterman had asked a question about Custom Elements. I thought I could easily answer this one, so I did. Interestingly enough, I learned a bit more as I answered the question. When I was going through code in the console to test, I noticed a few things I hadn’t before.

// namespace for custom elements
var customElements = {};

// create a prototype object based on HTMLElement.prototype
var registeredElementProto = Object.create(HTMLElement.prototype);

// add a function to registeredElementProto
registeredElementProto.foo = function () { return 'foo'; };

// add a property to registeredElementProto
registeredElementProto.bar = 'bar';

// add a createdCallback
registeredElementProto.createdCallback = function (newBar) {
  this.bar = (newBar || this.bar);
  console.log(this.foo() + this.bar);
};

// register our element with the DOM
customElements.fooBar = document.registerElement('foo-bar', { prototype: registeredElementProto });

Alright, this looks good, so now I will create an instance of my element using its constructor. Should be simple enough, right?

var myConstructor = new customElements.fooBar("Baz");
constructor error from custom elements

I spin the wheel and I can hear Pat Sajak laughing at me as the spinner lands on a ‘Bankrupt’ space. What just happened here? Let’s take a look at the lifecycle callbacks of a DOM element. These are functions that are fired internally by native code but can optionally be redefined.

HTML5 Rocks description of Custom Elements

One thing that was unclear to me before was whether or not you can override these callbacks to take parameters from the element constructor. Keeping in mind that “createdCallback” is not the actual element constructor, I would guess that it is most likely just a simple callback fired by the containing element constructor. I would hope something like this might be the case.

//...
constructor(arg1, arg2, ...) {
  // main constructor logic
  this.createdCallback.apply(this, arguments);
}
//...

If I update the first code example so that I inspect the arguments of a redefined ‘createdCallback’, I see that no arguments exist. Notice that I pass an argument in ‘fooBarOne’ and no arguments in ‘fooBarTwo’.

// ...

// add a createdCallback
registeredElementProto.createdCallback = function (newBar) {
  console.log("args", arguments, newBar);
};
// ...

var fooBarOne = new customElements.fooBar('myArg');
//-> outputs TypeError: This constructor should be called without arguments.

var fooBarTwo = new customElements.fooBar();
//-> outputs to console "args [] undefined"

I can see that ‘fooBarOne’ throws “TypeError: This constructor should be called without arguments.” and, as I would expect, ‘fooBarTwo’ logs an empty arguments object with an ‘undefined’ value for ‘newBar’, even though I tried to accept it in the callback signature. Based on this, I would bet that when we define custom elements, the native constructor simply never accepts arguments. So even if I redefine my callbacks with parameters, and even if my assumption that the constructor applies its arguments to the callbacks were true, nothing like this could ever work.
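
If an element really does need configuration at creation time, the workaround I reach for is a plain factory that creates the element and then calls a setup method on it. Here is a minimal sketch against the ‘foo-bar’ registration above; the ‘init’ method and ‘createFooBar’ factory are my own naming, not part of the spec.

// define this alongside the other prototype members, before registering
registeredElementProto.init = function (newBar) {
  this.bar = (newBar || this.bar);
  return this;
};

// a plain factory: construct with no arguments, then configure
function createFooBar(newBar) {
  var el = new customElements.fooBar();
  return el.init(newBar);
}

var configured = createFooBar('Baz');
console.log(configured.bar); //-> "Baz"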

I guess it makes sense, though, since custom elements can also be constructed using ‘document.createElement’, which takes only one parameter, the name of the element, and that is effectively what happens when calling “new ElementName();”. I’m a bit naive about browser development, but I would guess browser developers would either have to redefine ‘document.createElement‘ and the HTML element constructors and make it all backwards compatible, or they would have to further clutter DOM manipulation with a new function like ‘document.createCustomElement’ and change the internals of the JavaScript ‘new’ constructor for HTML elements as well.
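
For what it’s worth, a quick console check against the ‘foo-bar’ element registered earlier shows both creation paths behave the same way:

// both paths trigger createdCallback with an empty arguments object
var viaNew = new customElements.fooBar();          // logs "args [] undefined"
var viaCreate = document.createElement('foo-bar'); // logs "args [] undefined"
console.log(viaNew.foo() === viaCreate.foo());     //-> true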

Concierge UX : Part 3 of 3

This Black Friday I decided to do some research online in an effort to find a digital equivalent to the brick-and-mortar Apple Store Genius Bar experience. In Concierge UX : Part 1 I described my experience navigating Apple’s website and the challenges I faced that Black Friday. Concierge UX : Part 2 of this blog series focused on personas: who might use a Genius Bar experience online and what their needs would be. To conclude this series, I wanted to compare a few other online support experiences I looked at, summarize the overall experience I ended up with on Apple’s website, and describe what I was hoping it would become.

The Answer Desk

Microsoft now has an experience in their retail stores known as the ‘Answer Desk’. Immediately after finding the landing page for the Answer Desk online, I was able to contact a live person in just a few steps, but the experience was still not even close to comparable to visiting a physical Apple or Microsoft retail store.

The Microsoft Store Answer Desk

Figure A) Microsoft’s attempt at an online Genius Bar experience

A Bucketed Services Approach

It was promising to see an option to chat with a live person, but scrolling down I noticed a vague, short list of broad service offerings and pricing. How did the Answer Desk already know what my needs were? I may not have even known what my problems really were. How could I, or anyone for that matter, place a price tag on a technical solution without even understanding the problem? This brings me back to the IT online support techniques of old, where you would get asked questions like “Is the computer on?” or “Have you rebooted yet?” to further define the problem. In 2000, it seemed logical to go through a series of questions based on issues IT support had encountered before. Those questions are no less important, but it is almost 2014. The approach, the questions, and how we understand technical problems have changed so much since then. Today, with the vast number of new devices, varied operating systems, and extremely diverse user needs, it is no longer viable to just drop every problem into a limited set of technical solutions.

Answer Desk Offerings

Figure B) Answer Desk services were very generic and undermined the actual purpose the Answer Desk aims to serve

That was a red flag to me that this experience was not going to be as face-to-face and humanistic as I would like. A concierge experience is not about a rigid limiting set of services but is about fixing what you need when you need it so you don’t have to worry or stress about the little things. Whether that means helping synchronize your personal calendar on your iPhone with your work calendar in Outlook or removing viruses on your PC, a Concierge is ‘in your corner’ and is invested in you, the person, and not necessarily invested in you, the client.

The Contradiction

A Good Definition of Concierge UX

Figure C) The Microsoft Answer Desk seems to define the general goals of Concierge UX well

What was interesting about Microsoft’s case was the set of qualities of ‘Greatness’ defined for the Answer Desk, located directly below their technical solution offerings. These qualities were well aligned with what would be considered good Concierge UX, and the bucketed technical support solutions directly contradicted them (see Figure C). Take the last item under “What makes the Answer Desk Great”, which states “We know not all tech issues are the same”. If this is what makes the Answer Desk truly great, then why had Microsoft ‘bucketed’ their services into such limiting topics? Every problem would need to fall into one of those buckets and be given the same value because of its predefined cost estimate.

Covering Their Bottoms

Saying that the Answer Desk would “customize our training and support services to meet your unique needs” felt like legal speak to me. The Answer Desk would definitely solve your problem, provided it fell into one of their ‘buckets’.

Generalizing the Bucket Approach

I knew, from hands-on experience at other companies, that with this type of ‘bucket service’ model a ‘customized’ solution most likely meant the IT support team would run internal, canned virus and spam elimination tools. Those tools ran generic tests and fixed problems as they were detected, automatically. Very rarely would you get a personalized approach in a scenario like this. In many cases the real problems were never solved to the level they needed to be, and the customers ended up returning for repeat business.

Establishing Trust

Regardless of my opinions on ‘bucketed services’, I was much more concerned with how the live chat would go. Starting this part of the Answer Desk experience made me rather happy for a few reasons. First, I was able to start a chat in a fraction of the steps it took me on Apple’s site. Second, a live feed of Answer Desk support people was right in front of me on the page, complete with ratings, a profile photo, and the skills they held (see Figure D). The skills were limited by the main service bucket, but it was still a human I could now have some visual recognition of, as well as some establishment of trust (whether that trust is illusory or not, the psychology of it still holds value).

The People at the Answer Desk

Figure E) Here is a more human view of online support and the people at the Answer Desk

Chatting with The Answer Desk

Now I could choose who I wanted to wait for, or I could just chat with the next available person. I was chatting with one of the Answer Desk people very quickly (maybe a two-minute wait) and immediately asked about video conferencing or more personalized methods of collaborative help. Once the chat was started I did have a link to request that the Answer Desk person call me instead of chatting. It was a nice option to have, but I opted to stay on the chat anyway. Compared to the online chat with Apple (read about that interaction in Part 1), I felt even more disconnected here; I had to wait for responses with no indication that the technician was still there.

My Answer Desk Chat

Figure F) The beginnings of my Answer Desk Chat

The Same IT Support Service With A Different Dress

I explained to my Answer Desk technician that I was a developer looking for online support solutions at a Concierge UX level for a client. My questions were answered in an even more canned manner than in my experience with the Apple support person (see Part 1 for details). For instance, when I asked ‘yes’ or ‘no’ questions there was none of the conversational text that Apple had provided when answering them. When I asked an open-ended question such as “Can you talk to me about your video conferencing support options at the Answer Desk?”, the delayed answer was “Yes we have video conferencing but it is a paid service”. I had to ask several questions and stitch his answers together before I was able to get a clear understanding of my options. Additionally, I pressed for more public material or marketing that I could later review with my client. That was when the inevitable IT “this is not my department” answer came (see Figure G).

Figure G) My conversation with the Answer Desk technician

That really showed me this was a true call center with a detailed script and no personalization at the level I required. In terms of Concierge UX, this felt like a betrayal of trust. My technician had his own agenda, or the Answer Desk’s, in mind here. More importantly, he was not in ‘my corner’ and not on MY agenda.

History Repeats Itself

Remember the big-box company CompUSA? I used to work at one, in their in-store tech support offering. They took a similar but less refined approach to what Best Buy and Microsoft are doing now. Customers would bring in their machines, fill out the paperwork, and give up valuable time waiting for a tech to fix their problems. Sometimes problems took days or weeks to fix. You, the client, would be charged a good amount of money for a limited set of services. This service hardly ever got good feedback or reviews from clients. Where is the giant CompUSA now? They fell into bankruptcy at one point; a few retail stores still exist, I believe, but nothing close to what they once had. Eventually they were consumed by TigerDirect, a major online retailer. My point is that this approach may have worked for them in the short term, but look how successful it was in the long run.

The Beginnings of Concierge Innovation: Enter Mayday!

Mayday is the latest Amazon support service I looked into. I didn’t get to experience it personally, but I did find this video from Android Authority (as seen above) where they ran through the service and its features. The video was not as glamorous as Amazon’s marketing video, but it gives a much better idea of what the first-hand experience might look like. Keep in mind that this is just the beginning for Mayday; as of this writing, the service is only available on one Kindle device. A few things stuck out to me.

Very Fast Turnaround

Amazon boasts a 15-second turnaround to get assistance.

One Way Video

Something I had not clearly understood from Amazon’s marketing video was that the person assisting you at Amazon cannot see you or your surroundings. This is key to establishing trust and personal security boundaries. Imagine a Mayday attendant seeing the valuables on your fireplace mantle, noticing the lock types on your windows, or learning the layout of your home. Call me paranoid, but that could end badly.

Personalized Guidance

Having done a good amount of remote pair programming, I saw great value in having both parties able to view and manipulate the screen I am looking at. The fact that I, the user, can still control my device, but so can my attendant on the other end at Amazon, makes learning fast and easy. If I am in a hurry and I don’t want to learn (some people are silly like that), I can ask my Mayday assistant to take over.

High Level of Established Trust

Although Mayday attendants have a set of rules and a script, they need to look at your screen and keep their attention on you. Now that you have a face to go with their name, they know you will remember the conversation. I am sure these sessions are recorded for Amazon’s internal use as well.

Holistic View of the Experience

I would imagine, and hope, that because of these intertwined dependencies the Amazon Mayday technical assistance will evolve into a full-fledged Concierge User Experience. This would be an experience that not only helps with your Kindle but leads to much more outside of the Mayday video conference. It may start with questions like “How do I connect my iPhone to my Kindle to transfer files?”. Maybe then we are thinking bigger and having broader conversations like “I need tickets to a baseball game in New York next month. Amazon friend, can you find what’s available with me and help me order them?”. At that point we are talking about collaborative task management and device integration, but it can be much more than that. This line of questioning shows clear examples of how Mayday could grow, but what if we had this as an open platform and took it a step further?

Back to Apple’s Genius Bar

The more I navigated the maze of Apple’s online support experience, the more I realized I was not going to find the same experience; I didn’t even come close. The technology is here and available today, so why on earth would Apple not want to scale this experience? Now that I have tested many of my previous assumptions about what I thought I might experience online, I am left with more assumptions and fewer answers. Sure, I know what I enjoyed from the in-store experience, and I know a few ways I might translate and scale that digitally, but why haven’t smarter minds than mine at Apple figured this out?

Has Apple Secretly Tried this Already?

My revelation is that Apple had already figured this out a long time ago. Whether for internal political reasons, poor ROI estimates for such a product, or some other reason, it has purposefully been buried under the global support system at Apple. Not only that, but it is an experience that only points you to a visit to your local Apple store or to customer service via phone or chat.

Why Not FaceTime integration?

Interestingly enough, my conversation with the Apple customer service rep led me to the hypothesis that the online Genius Bar experience I desired was non-existent. In fact, the only video support available is obtained when you call customer support on the phone and specifically request video support. Then you would have to use Safari and log in to a web application at a URL they would provide. A FaceTime call, or any video conference of the kind I would expect from a concierge service, was not possible.

Technology & Video Conferencing

With live video conferencing you get the auditory inflections and nuances, along with the visual facial cues, of being right next to someone. Of course you don’t get the smells with video, but I could be persuaded to do without that. Video and audio can also be choppy depending on connection speeds, sound gets muffled, and direct eye contact sometimes gets lost depending on where the camera is.

First Generation Innovation Leaders

A Mayday-like experience is the closest thing available online right now to an in-person visit to the Apple Store Genius Bar. Amazon seemed to get this concept, and they were way ahead of Apple on the digital front. I kept wondering whether Apple was purposefully avoiding a digital translation of their Genius services. Perhaps internal politics? Maybe it was the technical problems video conferences tend to suffer from? Perhaps it was a staffing issue related to scaling or consistency of quality assurance. It was clear that Amazon had broken ground with the Mayday online experience and Apple was dragging its feet.

High Expectations?

Perhaps I had expected too much from Apple, or a Concierge UX online service in general, as I envisioned it. I did have very high expectations compared to the solutions I encountered. A concierge experience like the Genius Bar would inevitably evolve online. The online customer support space had been in need of something like this for a long time.

Amazon & Apple

Apple had mastered its rigorous recruitment criteria for Geniuses. They had solidified the marketing, training, and branding needed to attract and retain quality people at the retail store level, but nothing exceptional was being done online. Amazon had no need for what Apple had done in their retail stores, given how much of their revenue comes from their online presence, and Amazon had stolen the show in the digital service space. What if Apple and Amazon combined forces to create a consolidated online experience integrated with brick-and-mortar retail? Imagine what these two powerhouses could accomplish together.

The Endless Possibilities

Innovation Today

Now that we have established the online experiences of Amazon & Apple, let’s think about how to take those experiences one step further. Recently I read about the Microsoft Surface Blades. These are the Surface accessories that physically snap into and extend the interaction of a Surface tablet (Read the article here). Blades are no longer just keyboard input devices. They are now custom physical extensions for workflow and daily routines.

It is now apparent that product development professionals are not only thinking “outside the box” but now “outside the app” as well. These types of innovations are opportunities not just in technology but also for normalizing user experience. Thinking ‘outside the app’ extends the concepts of Concierge UX to a whole new level.

One Possible Future

Imagine visiting your doctor through a Mayday-like experience. Rather than travel, you could visit a private space nearby, turn on your tablet, and touch your ‘Doctor Contact’ button. Now you would be immersed in a one-on-one human experience without having to take the time to go into the doctor’s office. Some questions come to mind, like “How would the doctor check my general health, my reflexes, my blood levels, or even perform a procedure?” Who’s to say the medical tech community couldn’t build physical objects that integrate with medical systems through your personal device? Objects like that could become typical household items, like first aid kits are today.

If you needed a simple blood analysis, no problem. You just put this ‘arm-analyzer-device-thing’ on your arm and it transfers the data to the doctor’s systems through your tablet. After analyzing the blood, the doctor reviews the results and sends a prescription to your local pharmacy of choice.

Maybe the other ‘muscle-analyzer-thing’ the doctor asked you to put on your back could be used for muscle and bone analysis. It could determine that you need a custom back brace to a certain specification. No problem: your physician just sends your 3D printer what it needs to know based on the data they received, and you don’t even have to leave your home to get it. Sounds like science fiction, but it’s not as far away as we think.

The Persecution of JavaScript

JavaScript’s history is a checkered one of misuse and misunderstanding. It has evolved into a platform that, as of today, is arguably the most approachable, portable, and versatile programming language available. A former colleague of mine recently showed me an article claiming ten reasons why you should not use JavaScript. JavaScript, like every language, has its strengths and weaknesses, but what is interesting about the article is the naivety behind it. Comparing JavaScript to languages like Ruby, Go, and Java is like comparing a hammer to a power drill: each tool is very useful, but each is better suited to certain types of problems. Developers learn from their own experience how to approach certain problems with their language of choice. What we sometimes forget is that we must not assume another language approaches the same problem with the same tools, patterns, or paradigms. That said, let’s look at some claims from the article in question.

The Assumptions

JavaScript hurts your mobile visitors

I am assuming this refers to performance and usability, based on a comparison of HTML5 web applications to native applications. Web applications running on mobile devices are restricted by a few factors: the rendering pipeline limitations of the device, the speed of DOM interactions, and the degree of GPU acceleration all contribute to performance. JavaScript, like many languages, can always be optimized for mobile performance, but it is by no means even close to being a major performance problem. I can agree that fewer lines of code per request helps to “lighten the load”, but to say that the PhoneGap article bashing jQuery Mobile is referring to all JavaScript is just not true. In fact, as its title suggests, that post refers specifically to jQuery Mobile.

Many projects optimize web applications for progressive enhancement. One example is SouthStreet by Filament Group. They have taken several problems they see in modern web applications, especially on mobile, and proposed solutions to them. One SouthStreet component is Picturefill. At a very high level, it lets you download smaller assets for less capable devices instead of the desktop high-resolution versions, through markup and JavaScript. This is one way JavaScript actually helps with progressive enhancement. I also recommend taking a look at Jake Archibald’s article on progressive enhancement. Jake explains, “Progressive enhancement has never been about users who’ve turned JavaScript off, or least it wasn’t for me.” He goes on to detail approaches to what he feels true progressive enhancement should be.
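
The underlying pattern is simple enough to sketch without any library: keep a small default in the markup and only swap in the heavy asset when the device can justify it. The data attribute, file naming, and breakpoint below are placeholders of my own; Picturefill itself is markup-driven and works differently.

// progressive enhancement sketch: upgrade images on capable, wide viewports
// (URLs, the data attribute name, and the 768px breakpoint are all placeholders)
function upgradeImages() {
  var wideEnough = window.matchMedia &&
    window.matchMedia('(min-width: 768px)').matches;
  if (!wideEnough) {
    return; // keep the small default images on narrow or older devices
  }
  var images = document.querySelectorAll('img[data-src-large]');
  for (var i = 0; i < images.length; i++) {
    images[i].src = images[i].getAttribute('data-src-large');
  }
}

if (document.readyState === 'loading') {
  document.addEventListener('DOMContentLoaded', upgradeImages);
} else {
  upgradeImages();
}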

JavaScript hurts your robustness

The post talks about how “a single bug in your JavaScript can break the functionality”. This is why we, as developers, use linters and testing. If you are pushing code that you haven’t tested in your application’s supported browsers, that is not the fault of JavaScript. The post also argues that using JavaScript MVC frameworks couples your application and prevents testing future browser implementations of JavaScript. Projects like Chrome Canary and Firefox Aurora are extremely helpful if you want to test your code against future implementations of the browser today. This is also great because if you find a bug and report it, it will most likely be fixed by the time your users see it. One problem I do see often is that developers don’t test in their application’s least powerful target browser throughout development, which means bugs are not found early enough in the process. Testing early and often in at least the worst browser on the list will save so much pain in the long run. This is not a JavaScript issue.
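
Even a thin layer of tests catches the “single bug breaks everything” class of problem before users ever see it. Here is a minimal sketch using Node’s built-in assert module; the formatPrice function is a made-up stand-in for your own code.

// price-format.test.js - tiny example of guarding a utility with tests
var assert = require('assert');

// the function under test (a stand-in for real application code)
function formatPrice(cents) {
  if (typeof cents !== 'number' || isNaN(cents)) {
    throw new TypeError('formatPrice expects a number of cents');
  }
  return '$' + (cents / 100).toFixed(2);
}

assert.strictEqual(formatPrice(199), '$1.99');
assert.strictEqual(formatPrice(0), '$0.00');
assert.throws(function () { formatPrice('oops'); }, TypeError);

console.log('all formatPrice tests passed');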

JavaScript hurts your security (and your privacy)

We have all heard the horror stories of the evil hacker sitting in an abandoned shack somewhere in Portland, hacking into our servers and sucking out our souls from across the nine hells of the internet. Although JavaScript can be minified and obfuscated, a clever hacker will get your code if it is there in the browser. While I agree that client-side JavaScript has vulnerabilities in this area, I do not agree that this should be the responsibility of the JavaScript language. Each browser grants JavaScript access to quite a few features; without that authorization, many features of the applications we run in the browser would not be possible.

The fallacy in cyber security is that we are unprotected because of JavaScript. It is true that a simple Google search will yield results detailing standard attack vectors using XSS and XSRF; anyone can find step-by-step instructions and training videos on YouTube these days. Fear not: XSS and XSRF ARE preventable attacks. You can put countermeasures into your application on the backend to prevent them from hurting your users. As a general rule of thumb, don’t expose sensitive data in the browser where JavaScript has access. Server-side JavaScript has many more security options, so storing sensitive data there is not an issue. Your backend, which might be JavaScript, should always assume the browser can’t be trusted. On a related note, the real cyber threat is not JavaScript, SQL injection attacks, or even code hacks in general; it’s human hacking and the art of social engineering.
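
On the client, the simplest countermeasure is to treat user-supplied strings as text, never as markup. A tiny illustrative sketch follows; server-side output encoding, CSRF tokens, and HttpOnly cookies still belong on the backend.

// render untrusted input as text, not HTML, so injected script never parses
function renderComment(container, userText) {
  var p = document.createElement('p');
  p.textContent = userText; // textContent never interprets markup
  container.appendChild(p);
}

// an attack string like this ends up displayed literally, not executed
renderComment(document.body, '<img src=x onerror="alert(1)">');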

JavaScript hurts your SEO

Front-end developers seem to have a love of clean markup (I know I do). If you code your HTML semantically, separate your concerns, and follow accessibility standards, you can progressively enhance your page for robust functionality while maintaining SEO. Section 508 has been due for an overhaul but still yields some great practices that can help SEO, and WAI-ARIA attributes and patterns let you enhance the page while keeping it crawlable. The big concern cited in the post was AJAX. Making your site functional without AJAX is sometimes an option, but not always. This is not a JavaScript problem either; the language was not designed for SEO purposes and claims neither to assist nor hinder it.
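
A hedged sketch of what that looks like in practice: the content is already in the markup for crawlers and older browsers, and JavaScript only layers AJAX navigation on top when the browser can handle it. The ‘.ajax-nav’ class and ‘#content’ container are placeholders of my own.

// progressive enhancement for navigation: real links work everywhere,
// AJAX only kicks in when the browser supports what we need
if (window.history && history.pushState && window.XMLHttpRequest) {
  var links = document.querySelectorAll('a.ajax-nav');
  for (var i = 0; i < links.length; i++) {
    links[i].addEventListener('click', function (event) {
      event.preventDefault();
      var href = this.href;
      var xhr = new XMLHttpRequest();
      xhr.open('GET', href);
      xhr.onload = function () {
        document.getElementById('content').innerHTML = xhr.responseText;
        history.pushState(null, '', href);
      };
      xhr.send();
    });
  }
}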

JavaScript hurts your development time

Browser tools for Internet Explorer have come a long way. Modern.ie has a lot of testing tools for Internet Explorer, and the new F12 tools are no Firebug or WebKit debugger, but they are getting there. Browser tools, and even server-side tools for Node.js like DTrace and traceGL, are extremely helpful. Using a linter and checking target browsers both in continuous integration and during development is extremely important. Unit testing and end-to-end testing for JavaScript are a necessity when building massive applications; would you leave testing out of the backend of an application? Development time optimization comes with experience no matter what platform you are on.

JavaScript hurts your testing costs

If I understood this section correctly, the “maintenance costs of testing” and reliance on browsers’ HTML implementations are the issues causing this ‘pain’. Testing is a quality investment, whether it is on the data layer, the back end, or the front end. If you are testing the right functionality in your code and you are communicating what is being tested, then testing is maintainable. This is a difficult and subjective discipline, especially on large teams where everyone has a different style of testing and coding. That said, testing code is a discipline and an irreplaceable tool for ensuring better quality software, and that is true not just for JavaScript but for all programming languages. As for setting up these frameworks, you most likely need a browser matrix that you are targeting for your code, so I get that the dependency exists. Given that you are creating tools and applications with JavaScript that will run on those targets, it is no different than server-side testing requiring environment and system dependencies. End-to-end testing is similar to integration testing on the server side. Whether you are writing unit tests or integration tests, there are bad tests and good ones. JavaScript is no exception here, but it is not at fault either.

JavaScript hurts your website performance

Nothing hurts more than watching a client scream at an application you built that takes six or more seconds to load. Premature JavaScript performance modifications are not necessary. If you are wealthy enough (or have a client that is) to use a tool like New Relic, you may find that the page is actually waiting on the server, for some crazy SQL call or a slow service endpoint. Just because the browser takes longer to load a page doesn’t mean the JavaScript is at fault. When a performance issue is determined to be on the client side, it is usually down to bad coding practices. Restricting the number of DOM elements, using spriting techniques, and concatenating static assets to avoid vast numbers of server calls are a few major contributors to better front-end performance. Tools like jQuery abstract away browser-specific implementations, but overuse of the DOM is not recommended; that is a failing of how the DOM is implemented across browsers, not of JavaScript.
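
When the problem really is on the client, the fixes are usually mechanical. One of the cheapest wins is batching DOM writes instead of appending in a loop; a small sketch follows, with the list and items left as placeholders.

// slow: many separate DOM insertions, each one can trigger extra layout work
function appendSlow(list, items) {
  for (var i = 0; i < items.length; i++) {
    var li = document.createElement('li');
    li.textContent = items[i];
    list.appendChild(li);
  }
}

// faster: build everything in a fragment, touch the live DOM once
function appendBatched(list, items) {
  var fragment = document.createDocumentFragment();
  for (var i = 0; i < items.length; i++) {
    var li = document.createElement('li');
    li.textContent = items[i];
    fragment.appendChild(li);
  }
  list.appendChild(fragment);
}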

JavaScript hurts your software investment

JavaScript is almost 20 years old and is now a proven platform for major applications across the world. Applets and Flash require a plugin and are not the same as JavaScript; JavaScript is native to the browser, so you cannot compare applets to JavaScript. “Client-side technology is doomed to fail” is very presumptuous. I am not sure why you would assume JavaScript is doomed to fail when it has been widely adopted by large companies like Google, Microsoft, Facebook, and so on. I can’t even think of a company that has wanted to build a web application without any JavaScript at all.

JavaScript hurts your software architecture

First off, I love MooTools, but it is an abstraction of JavaScript; underneath MooTools is plain old vanilla JavaScript. It is a bad idea to assume that backend developers understand how JavaScript works, and, respectively, it is bad to assume JavaScript programmers understand how the server-side backend and its complexities work. JavaScript is flexible enough that it can emulate many styles of development. There is no right or wrong way to code in JavaScript, only styles and tradeoffs. If you don’t understand its paradigms and approaches to coding, you should research and learn them.
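
As one small example of that flexibility, the same counter can be written in a module style with private state or in a constructor-and-prototype style, and neither is ‘the’ right way. Both sketches below are mine, not from the article.

// revealing module style: private state through a closure
var counterModule = (function () {
  var count = 0;
  return {
    increment: function () { count += 1; return count; },
    value: function () { return count; }
  };
})();

// constructor/prototype style: shared methods, per-instance state
function Counter() {
  this.count = 0;
}
Counter.prototype.increment = function () {
  this.count += 1;
  return this.count;
};

counterModule.increment();   //-> 1
new Counter().increment();   //-> 1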

JavaScript is not needed

To a degree, I actually agree with this one in some cases. For accessibility and mobile-first approaches, we can build the application MVP as a semantic HTML version with a minimal script tag that detects capabilities and provides a richer experience on richer device targets. The rich functionality we see in applications these days comes from JavaScript (and CSS). Creating Gmail without JavaScript is probably a bad idea, and there are still things that can only be done on the client side. JavaScript is just another tool in the toolbox and should be used to create great software.
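
That minimal script tag can be tiny. Here is a sketch of the kind of capability check I have in mind, loosely in the spirit of the BBC’s “cuts the mustard” test; the feature checks and the enhanced.js path are placeholders.

// only load the rich experience on browsers that can actually run it;
// everyone else keeps the working, semantic HTML baseline
if ('querySelector' in document &&
    'addEventListener' in window &&
    'localStorage' in window) {
  var script = document.createElement('script');
  script.src = '/js/enhanced.js'; // placeholder path
  script.async = true;
  document.getElementsByTagName('head')[0].appendChild(script);
}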

JavaScript is not perfect, and it is definitely not the end of the road for application development. It is a flexible tool akin to a Swiss Army knife: powerful, portable, and versatile, with many problems it can solve and many ways to solve them.