A little over two weeks ago, my father-in-law sent me an article entitled “Why Google doesn’t care about hiring top college graduates.” The article is excellent (as is the NYT article that it’s based on), but what particularly got my attention were the things that Google’s Laszlo Bock had to say about what he calls “intellectual humility.”
Bock’s comments prompted me to start thinking again about something that has troubled me for years: arrogance among software engineers. Happily, none of my current team members seems to suffer from this particular weakness, but that has not been the case with other teams I’ve worked on, so I’ve seen firsthand the kind of damage that can be done when influential team members have oversized egos. They hurt the team, they hurt the project, and ultimately, they hurt themselves.
Some years ago, while I was working as a senior engineer on a fairly large, geographically distributed team, I phoned a junior engineer to see how she was doing on her current task. I’ll call her “Mary.” Mary was despondent, and did not feel confident about completing the task. I knew she was perfectly capable of completing it; she was not a rock-star programmer, but she had implemented requirements of similar or even greater complexity in the past. As we spoke, it became evident that she was feeling down about herself because of something another team member, whom I’ll call “Mark,” had said about her work during a meeting.
“Mark thinks my code stinks,” Mary said.
I was angry. I had not been in the meeting, but I had already heard about Mark’s performance. And this was far from the first time that he had tried to embarrass and humiliate people by calling out their errors in front of others. This time, however, he had picked on a particularly vulnerable team member, and his comments had done damage. I did not let Mary know that I was angry. Rather than make a big deal of it, I wanted Mary to be able to shrug it off, so I responded matter-of-factly, “Mark thinks everyone’s code stinks.”
“That’s true,” she replied, and I knew it had worked. We talked a bit more, and it was clear that she was ready to put Mark’s comments in perspective and move forward with her task.
I have obviously not forgotten the incident. Mark’s callous remarks had rendered a valued team member unable to progress for several hours, and then required me to spend some of my time to get her to refocus. And this was not an isolated incident. There is simply no way to calculate the negative effect that Mark’s conduct had on the team’s morale and cohesiveness. This, of course, adversely impacted our ability to hit our targets.
Besides draining the team of enthusiasm and camaraderie, arrogant individuals hurt the projects they work on in other ways as well. For example, their high-handed approach to other members’ code often generates more bugs and drains other engineers’ time. I have seen multiple cases where a presumptuous engineer failed to understand a piece of code and therefore assumed it was poorly written and made no sense. When such an engineer implements a change request or bug fix in someone else’s code, he (it’s usually a “he”) casually dismantles the code he has deemed “stupid” and thereby introduces a new bug. Then another engineer has to go back in and undo the mess. As a result, two engineers’ valuable time has been wasted: first the arrogant one’s, spent breaking something that was working just fine, and then the other’s, spent re-creating what the first one destroyed – and often re-coding the first one’s “fix” as well.
There is much more that could be said about the damage that arrogance does to teams and projects, but I’d like to close this already-long post with some thoughts about the damage that arrogant people do to themselves. This brings us back to what Google’s Bock said about the importance of intellectual humility:
Without humility, you are unable to learn. Successful bright people rarely experience failure, and so they don’t learn how to learn from that failure. They, instead, commit the fundamental attribution error, which is if something good happens, it’s because I’m a genius. If something bad happens, it’s because someone’s an idiot or I didn’t get the resources or the market moved.
Arrogant people frequently have difficulty seeing the value in opinions or strategies that differ from their own – especially when those opinions or strategies originate with people whom they deem to be inferior. Most of us recognize that we can learn a lot from people who are junior to ourselves, as well as from our peers and superiors. By being open to the ideas of others, we learn and grow, not only professionally, but personally, in that we learn to appreciate other people more. People who are blind to this fact cheat themselves of so much.
We have a fairly complex enterprise application that is built with Flex, BlazeDS, Java, and Spring. Version 1.8 is currently in production, and we are looking forward to delivering 2.0 before the end of the year. The Flex client by itself includes over 78,000 lines of code in 1062 files. In addition, it leverages our own corporate library SWC plus several third-party SWCs (both commercial and open-source). It was built using the Adobe Flex 4.5.1 SDK. We did not move up to 4.6 because it had bugs that broke some of our more dynamic skins.
At this point the feature list is pretty stable, and so we are thinking about performance. Some of our screens require large data sets with complex interrelationships, and algorithms that run to update those relationships (and thus redraw the visualizations) with nearly every user gesture. All in all, these screens seem to be rather obvious candidates for ActionScript workers - and so it seems time to move beyond the 4.5.1 SDK.
With no small amount of trepidation, I set up a new workspace with a fresh pull from SVN targeting the new Apache Flex 4.10.0 SDK, and made a small tweak to the compiler arguments. The result? It just worked.
After building the client with the new SDK, I put it through its paces in my local development environment. Then I deployed it to our Dev/Test server, and had a QA engineer and a software engineer perform regression testing. In the end, we found a total of 0 issues.
So, all I can say is: Kudos to the Apache Flex team. You have done a fantastic job, and deserve to be proud of what you have accomplished.
The mailto scheme is a standard that has been in place since the beginning of time, in Web development terms. That being the case, we might expect all browsers to support it in a consistent and predictable manner. However, it turns out that at least one major browser has not managed to support this standard reliably.
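For reference, a mailto link is just an ordinary anchor whose href uses the mailto scheme. A minimal example (the address and subject here are placeholders, not anything from our application):

```
<!-- A minimal mailto link; address and subject are placeholders. -->
<a href="mailto:someone@example.com?subject=Hello">Send us an email</a>
```

Clicking such a link should open the user’s default mail client with the address and subject pre-filled.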
I stumbled across this issue when a mailto link quit working in Chrome. A bit of research revealed that Chrome had trouble with mailto in 2011, but nothing more recent had been reported anywhere, so I posted the issue to the Chrome product forum. As of this writing, no one has offered a solution or workaround. You can see a demo of the bug here.
Chrome is a great browser for personal use, but I’m beginning to wonder whether it’s a suitable container for enterprise applications. This is something I would rather not have to think about, of course. After nearly seventeen years in Web development, it is disappointing to still be running into browser compatibility issues. But in the final analysis, when life hands you lemons, what is there to do but make lemonade?
We have a wonderful group of end users who are entirely willing to work with us in whatever way they can. Though Chrome seems to be, in general, their browser of choice, they have already volunteered to refrain from using it. If we keep encountering issues like these, it appears that I am going to have to ask them to do just that.
This will be old news for some, but Apache Flex is now a Top-Level Project of the Apache Foundation: it’s out of the incubation stage!
Also, Flex SDK 4.9 is now available, complete with a shiny new installer.
This represents the work of a lot of brilliant, dedicated, and hard-working people. It is great to see their efforts bearing such fruit.
Read the full press release here.
The previous post outlined an issue that we faced as a result of the fact that describeType() outputs more information when the SWF is compiled in debug mode than when it is compiled in release mode. This post provides a couple of possible solutions.
Solution 1: Compiler Option
The first solution was referenced in a helpful comment by Simon Gladman: the keep-as3-metadata compiler option. The ArrayElementType metadata is retained in the release SWF by adding this snippet to the compiler options:
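Concretely, that means appending the metadata name to the option in the additional compiler arguments:

```
-keep-as3-metadata+=ArrayElementType
```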
Notice the += operator. This ensures that the default metadata is also retained; according to the documentation, the = operator would cause the default metadata to be replaced. I was not able to confirm this in my testing, but += is more intuitive, and perhaps safer as well.
Here is a snapshot of the compiler option in Flash Builder:
Keep AS3 Metadata Compiler Option in Flash Builder
You can see a demo of the above solution running here. To see the difference, you can see the original version (without the compiler option) here.
Solution 2: XML
The second solution, which is the one we implemented in our project, was to add the data type to the schema for the XML that stores the screen layouts, widget configurations, and service calls. While this solution requires slightly more maintenance, it is more robust. For example, it allows us to use our own data type descriptors, such as guid. It also provides a safeguard against arbitrary code changes: if a developer changes the data type of one of these properties in the ActionScript, the application will break as soon as the developer runs it, preventing the error from being deployed. If we were to rely on the compiler option and describeType(), and the application were deployed without thorough regression testing, then the error might not appear until a user tried to open a previously saved screen or widget.
So, we created a set of enums in ActionScript to define the data types that may be stored in our XML configurations, and we use these enums rather than string literals to create the XML:
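ActionScript has no native enum type, so in practice this means a class of constants. The following is an illustrative sketch, not our actual code; the class and constant names are hypothetical:

```
// Hypothetical constants class standing in for an "enum" of data types.
public final class DataTypes
{
    public static const STRING:String  = "string";
    public static const NUMBER:String  = "number";
    public static const BOOLEAN:String = "boolean";
    public static const GUID:String    = "guid"; // our own custom descriptor

    public function DataTypes()
    {
        throw new Error("DataTypes is not meant to be instantiated.");
    }
}

// Building the configuration XML with constants rather than string literals:
var propertyNode:XML = <property name="id" type={DataTypes.GUID}/>;
```

Using the constants means a typo becomes a compile-time error instead of a silently broken configuration.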
Likewise, we use the enums to evaluate the data types when parsing the XML that comes back from the server.
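The parsing side can be sketched as follows; again, this is hypothetical illustration rather than our real API (DataTypes is an assumed constants class, and parseGuid an assumed helper):

```
// Hypothetical sketch of evaluating a node's declared data type.
var value:Object;
switch (String(propertyNode.@type))
{
    case DataTypes.GUID:
        value = parseGuid(propertyNode.toString());
        break;
    case DataTypes.NUMBER:
        value = Number(propertyNode.toString());
        break;
    case DataTypes.BOOLEAN:
        value = propertyNode.toString() == "true";
        break;
    default:
        value = propertyNode.toString();
        break;
}
```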
There are certainly other possible solutions, but either of the above will work.
My team is building an enterprise web application with a Flex UI that generates its screens at runtime based on XML files. It determines how to form its data service requests, what components to use, and how to configure the components (line styles, custom data grid columns, etc.), all based on the XML, using a schema that we developed in-house. It’s a lot of fun.
The ability to quickly and easily introspect classes comes in very handy with this sort of development, since we don’t want to write or maintain a separate method for every class defined by our XML schema to instantiate itself. The native flash.utils.describeType() is very useful here, as it gives a fairly complete description of the class in an easily traversed XML format. However, I learned that it can also be misleading.
I had created a single routine to initialize a wide range of classes from XML, and it worked perfectly - as long as it was running in debug mode. It took a while, but I found the cause of the problem: describeType() gives more information when the SWF is compiled as a debug SWF than when it is compiled as a release SWF.
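The difference is easy to see by querying the describeType() output for the metadata in question. A minimal sketch, with the class and property names invented for illustration:

```
import flash.utils.describeType;

public class Config
{
    [ArrayElementType("String")]
    public var tags:Array = [];
}

// Elsewhere:
var desc:XML = describeType(new Config());
var meta:XMLList =
    desc.variable.(@name == "tags").metadata.(@name == "ArrayElementType");
trace(meta.length()); // 1 in a debug SWF; 0 in a release SWF
```

In a debug build the query finds the ArrayElementType node; in a release build (without the keep-as3-metadata option) it comes back empty.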
Screen shot depicting demo app compiled two ways (one as a debug and the other as a release SWF) and displayed in the same HTML wrapper. Click image to view.
It’s not too surprising that such metadata as “__go_to_definition_help” is only available for a debug-version SWF. However, I did not expect to see something as useful as ArrayElementType omitted from the release SWF.
A couple of workable solutions have been posted here.
I continue to hear from recruiters looking for senior Flex and Flash developers. The recruiters represent clients who are looking for engineers, trainers, and architects to work on new projects as well as existing applications. So perhaps you will understand when I say that I’m still not convinced that Flash is dead.
Over the years, we have seen a steady parade of “Flash Killers” appear on the scene (Safari, SVG, Canvas, Ajax, Silverlight, HTML5, etc.). Of course, Flash did not die. So it would be easy for me to be complacent and assume that the latest Flash Killer will fail to do the job, just as all of its predecessors did. It would be easy, that is, if the latest Flash Killer were anything but Adobe Systems, Inc.
Adobe may just be able to do the job. After all, they own the technology. However, I’m not certain that’s enough. It could be that Flash is, in a sense, bigger than Adobe. After all, Flash had already been ubiquitous for some years before Adobe bought Macromedia, so we can’t exactly say that Flash is Adobe’s baby. Flash has a life of its own. It has a large and vibrant developer community. Many multi-billion-dollar organizations have invested untold millions of dollars in applications built with Flash. Many, in fact, are still building applications with Flash.
This leads me to think of another technology that has been declared dead by pundits countless times since it became ubiquitous. Sun Microsystems is gone, but Java lives on. And based on the number of new Java applications being built, it would seem that Java will continue to thrive for the foreseeable future.
So, has Adobe succeeded in delivering the death blow to Flash? Well, I don’t know, and I’m guessing that you don’t, either. Time will tell. And it will be interesting and exciting either way. There are a lot of smart, creative, and innovative people doing a lot of very cool things at any given moment. I’m looking forward to discovering what will happen.
Meanwhile, Flash will continue to have a significant presence for at least the next several years. Even if Adobe has been successful in killing Flash, it won’t die quickly: It is too ubiquitous, too popular, and too good at what it does.
Here is a humorous little footnote on the discussion: I clicked on the link in Tink’s comment below to check out Lightspark, and across the bottom of the page was a cool widget promoting the HTML5 Center (a joint venture between SourceForge and Microsoft). The widget, of course, was running in the Flash Player!
I had come to trust Gmail’s spam filter so much that I never checked the spam folder. In fact, I had not checked it in several months - maybe even a year. But this evening I saw Cliff Hall’s post about Google spam-filtering itself, and decided I had better take a look. Sure enough, there were four or five non-spam messages in there. The unsettling part is that Gmail automatically deletes all spam after 30 days, so if there were any legitimate messages in there from more than a month ago, they are now gone forever.
If you sent me a message at some time in the past and didn’t get a response, now you know why. If you are like me and had learned to trust Gmail’s spam filter, then it’s time to quit trusting it.
None of the teams I have worked with to date has fully adhered to a formal Agile methodology, but most of them have taken a highly iterative and collaborative approach to the development process. Although “agile” is a good word to describe such an approach, it seems better to avoid its use here since formal Agile methods are not particularly what I have in view. The word “responsive” seems to be a suitable alternative, in that we’re looking at approaches that are responsive to the customer.
While I firmly believe that a responsive approach is best for most types of software, I have also come to learn that it presents a number of hazards. Unfortunately, even though these hazards have been documented numerous times, I have learned to truly appreciate them the hard way: by seeing them go from potential dangers to real fiascoes. It seems worthwhile to put these lessons in writing; the exercise will help me to remember what I have learned, and perhaps it will also help someone else learn from my mistakes. And, if you respond by sharing your own insights, then I will learn even more.
So, here goes…
Change is at the heart of any responsive approach. You build a piece of software, let the customer evaluate it, and then change it in response to the customer’s feedback - and you repeat this process over and over again. But it is axiomatic that whenever you change a line of code, you risk introducing new bugs. So it is almost inevitable that your collaboration with the customer will generate new bugs.
The short development cycle is also at the heart of any responsive approach, because you need to move quickly in order to get something into the customers’ hands so that they can provide the feedback you need to continue development. Any time you are in a hurry, you increase your likelihood of making mistakes, whether you’re developing software or making an omelet. So, again, we are almost guaranteed to introduce bugs with this approach.
The need for speed also limits the time available for testing, which reduces the likelihood that our QA team will catch the newly introduced bugs before the customer sees the software. As a result, we not only increase our chances of introducing bugs, we also increase our chances of delivering the bugs to the customer.
So how do we mitigate this? I think the best solution has three parts: one technical and two managerial.
The technical part is to write code that is flexible. Keep your classes small, well encapsulated, and loosely coupled. This will greatly enhance your ability to make changes without creating new bugs. (More on this in a later section.) This is also difficult to do under time constraints, because it is almost always faster to write brittle code. But we need to remember that the shortcuts we take now will come back to haunt us later.
The first managerial part is to give your developers ownership of application components. When an engineer feels like he or she owns a piece of functionality, then pride of ownership has an opportunity to take hold. The engineer cares about the component, and feels more responsible for it, and therefore takes more care to see that it works properly. Also, the engineers become experts on their own components, and this brings real benefits: They are less likely to deliver something that does not function properly, because they know how the components are supposed to work. And, because they know the code, they are more likely to know how to make changes in it quickly without breaking something.
The other managerial part is to regularly perform code reviews. This provides opportunity to monitor and enforce the technical part of the solution, and also keeps the first managerial part from becoming a problem. When an engineer owns a piece of functionality, and knows that no one else is going to be looking at the code, there is a temptation to be less careful about things like coding standards and naming conventions. The code is likely to reflect more of the style of the individual developer and less of the team’s accepted norms, and can end up being almost impossible for anyone but the original developer to understand. Regular code reviews can forestall this, and ensure that the code is maintainable and extensible.
Very often, when an end user is given a piece of software to evaluate, the response is something like this: “This is good, but it would be really great if it could also do X!” Such a response is entirely natural. Whenever we get a new gadget, our imaginations get sparked, and we begin to think of really cool things that the gadget almost does but not quite, and we wonder how hard it would be to make it do those really cool things.
When we as developers get this kind of feedback from our customers, our instinctive reaction may be to say, “Sure! We can make it do X!” And that instinct is good: it shows that we really want to build software that will help our customers to the greatest extent possible. However, like all of our instincts, it needs to be ruled by our reason.
In order to properly channel our “satisfy-the-customer” instinct, we need to scrutinize customer feedback and carefully evaluate anything that amounts to a request for additional functionality. If we are 100% certain that we can implement it easily, and that doing so will not affect our schedule, then we should do it without hesitation. Otherwise, we need to determine the impact, communicate the cost to the customers, and let them decide whether they want to pay for it or not. And by “pay for it” I don’t mean simply money, but time as well. Even if we are willing to make the change without charging an additional penny, the customers need to know how much longer they will need to wait to get the finished software in their hands. Then they can decide whether the additional functionality is worth the cost. They know their business better than we do, and they know what the change is worth to them, and we need to respect that. It is essential for us as developers to realize that we are not doing our customers any favor if we keep them waiting for weeks or months beyond deadline while we code additional features that they can live without.
It is easy to take for granted that the people we talk to on a regular basis understand what we are about, where we are coming from, and where we are going. However, common experience tells us that this is not necessarily the case: On the contrary, if we want to be certain that others understand a particular fact, then we must explicitly communicate that fact to them. We can’t assume that they picked it up from our general conversation.
This is particularly important to remember in the context of responsive software development. Because our customers are also collaborators, and we tend to develop a comfortable working relationship with them, we can easily take for granted that they understand our perspective on the process. But when we allow ourselves to fall into that pattern, we risk not only the success of the current project, but our long-term relationship with the customer as well.
I have seen solid working relationships between customers and developers destroyed by the developers’ failure to properly communicate the impact of changes. On the other hand, I have seen working relationships that survived very difficult projects largely because the developers communicated the impact of changes frequently and clearly, and in so doing they effectively managed the customers’ expectations.
So, it is vital that all significant facts be clearly and explicitly communicated to the customers. Did you decide to implement some new functionality that they requested on the basis that you could do it easily without impacting the schedule? Make sure they know what you did and the reason you did it. This way they will see that you are really working for them, and at the same time it will help them to understand why some change requests cost extra while others do not. Did you agree to make a change that will impact the schedule? Make sure the customers know what that impact will be, so that they know how much of a delay to expect. They may not always like what you have to say, but they will learn to trust you.
Developing for the Customer
This is one of the more paradoxical dangers of responsive development practices. After all, the whole point of taking such an approach to development is to ensure that software not only meets the customer’s needs but actually streamlines the end users’ work flow and enables them to focus on their business rather than on operating their software. So what could be wrong with developing with the customer in mind?
Here is the danger: when you code for a specific customer, and you have taken the time to know your customer’s business needs and work flows, it is easy to code yourself into a corner by thinking such things as, “I know they always do X in this particular way.” Because you know how the customer does things, you unintentionally design your classes with built-in assumptions about how things will always be done, and you end up writing brittle code.
Keep in mind that you may end up with more customers who are interested in the application you’re building. If that happens, each new customer will need some changes. But even if you are 100% certain that you will only have one customer, remember: in the real world it is not unusual for customers to make changes to their work flows or business rules. Whether you have one customer or 100, some features will need to be omitted, others added, and some will need modification.
So, even if you are building an application that will only ever be used by one customer, guard against the complacent attitude that you know how the application will be used. Design and write the code as though you intend to market it to a broad range of organizations, and you have no idea how they will want to use your application. Bake flexibility, scalability, and extensibility into your architecture. Code to interfaces rather than concrete classes. Design and build loosely-coupled components. Maintain clean separation of concerns. Layout your UI’s in such a way that controls can be rearranged, added, and removed easily.
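As one concrete illustration of coding to interfaces, consider the following sketch (all names here are hypothetical, not from any real project): the view depends only on a contract, so a new customer’s data source can be swapped in without touching the view.

```
// Hypothetical contract: any data source the view can be handed.
public interface IReportDataSource
{
    function fetchRows(criteria:Object):void;
}

// The view depends on the interface, never on a concrete service class,
// so remote services, local mocks, or a new customer's backend all plug in.
public class ReportView
{
    private var _dataSource:IReportDataSource;

    public function ReportView(dataSource:IReportDataSource)
    {
        _dataSource = dataSource;
    }
}
```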
We all live in the real world, and seldom have the opportunity to work in anything resembling ideal conditions. We deal with absurd deadlines. We work on teams that are ridiculously understaffed. Sometimes we feel like we need to just push out the code as fast as we can, and at the end of the day our only concern is that the software actually works. We’re just doing the best we can.
But when we have time to breathe and reflect, we should make the most of it. Maybe we’ll actually come up with the ideas and the resolve that will prepare us for the next time we’re under the gun.
Very few posts on this blog are personal. This is an exception.
I have neglected this website for well over a year now. During that time a number of people have posted questions about things I had written or left comments to inform me of broken links, but I have not responded. I want you to know that I have neither been careless nor have I been intentionally rude. This post is to let you know what has been going on.
About three years ago, my wife began to exhibit some neurological symptoms. First she had one; a few months later she had another; and so forth. The doctors did not know what was going on, although in some instances they thought they knew what the causes were. They actually performed surgery for one of her symptoms - surgery which proved useless and unnecessary, because it did not address the symptom.
Finally, in the fall of last year, her primary care doctor ordered an MRI of her head, and it revealed that there was a tumor on her brain. The neurosurgeon was pretty certain that it was benign, but he was also convinced that it should be removed. He removed it last December, and my wife recovered from the surgery beautifully. Then, in January of this year, we received the biopsy report: it was malignant.
My wife underwent six-plus weeks of radiation therapy, and this took us into April. Since that time, while recovering from the effects of the radiation, she has been researching ways to help ensure that the tumor does not come back. As a result of this research, she has been able to help herself greatly through nutrition and exercise.
In October she had her first follow-up MRI, and it was fantastic. Not only is there no sign of any recurrence, there is hardly any sign that there was ever a tumor there in the first place. Even the doctors were impressed. Needless to say we rejoiced and offered many thanks to God for that good news.
Earlier this month, my wife and I celebrated our 20th anniversary. We went to Vermont (where we spent our honeymoon) and stayed two wonderful nights at a lovely Bed & Breakfast.
We are thankful for all the blessings of the last 20 years, including the good that has resulted from the difficulties we faced, and we are looking forward to the next 20 years. We also feel that we have good reason to hope for a healthy 2011.
So, I think I am at a point where I can pay attention to this website again. My first task will be to fix broken links, and then I plan to resume writing. I hope that I will be able to contribute some useful things to the community in the future.
Thanks for “listening.”