51 Elliot: Notes from Silicon Valley North, by Darren<br />
<br />
<h3>Productivity and Note-taking (2018-05-02)</h3>
I told a friend of mine that I wasn't really happy with the amount of time that gets taken up by Slack and "communication and scheduling" in general, and that on one particularly "noisy" week it had consumed around 7 hours. He said that was "hardly anything" and that most of his work was done via Slack. He's in a support role, so that makes sense. Slack can definitely be a useful tool, but in the past few weeks I've managed to keep my usage down to about 3 hours per week.<br />
<br />
On a related topic, I've tried a bunch of different task management techniques over the years and none of them ever stuck. I've always ended up with a fragmented collection of things to do, scattered between various Notes apps, email-based task lists, Trello boards, and hand-written notes. The problem with a lot of the software-based task management options for me is that they're not always in front of me and they take a conscious effort to open and use.<br />
<br />
A notebook on the other hand is always on the desk beside me, usually open to the last page of notes. There's no effort getting to it, and I can easily glance over without disrupting whatever I was doing on the computer.<br />
<br />
There's a system called a bullet journal for keeping and managing lists. The website <a href="http://bulletjournal.com/">bulletjournal.com</a> explains the system and has an online store with BuJo journals designed to work with it. There's also an app, so for people who really want to get into it there are a variety of options. You don't need a special BuJo journal to start using the technique, however.<br />
<br />
I've switched my list of tasks from an online document to my pen-and-paper bullet journal now. I'm not fastidious about keeping the journal up to date on a daily or even weekly basis, but I find it's just a lot easier to take notes this way than it was to type things into a digital form. The only downside I can see is that it's harder to share than digital notes, which can be copied and pasted. But frankly, most of the tasks on my list are too detailed and boring for anybody else to care about; people just want to know how a feature is coming along. So, check out the bullet journal.<br />
<br />
<h3>Slack is a major productivity drain (2018-03-26)</h3>
For the past few years I've been using RescueTime time-management software. It keeps track of the different activities I've spent time on during the work week and helps me stay focused on things that contribute to my productivity. Another application that's become a big part of many workplaces is Slack, which we use for communication on my current team. A lot of the time it's really helpful. You can fire off a quick message and get a quick reply without really interrupting your main task. You can ask a question, get valuable opinions from everyone concerned, and come up with a consensus that works for everybody, in a way that would've been impossible with traditional email and meetings.<br />
<br />
Recently, though, I was shocked to see that Slack accounted for a full 7 hours of a single work week. We've been doing a lot of planning and discussion for new feature work, and also working through some issues related to customer documentation, deployments, and other things that are not specifically coding-related, so it's understandable that there's been more planning and discussion activity than usual, and less actual code-writing. But 7 hours! My gosh. That is a real time sink.<br />
<br />
This was a wake-up call about something I already intuitively knew: Slack - as helpful as instant messaging can be - has the potential to be a black hole where productivity gets sucked into a terminal death spiral.<br />
<br />
The challenge is how to rein in this Slack tyranny. If you don't monitor the conversations going on in various Slack channels, you're liable to miss out on vital information. A lot of people seem to think that broadcasting a message on a Slack channel is "job complete" when it comes to communication - a surrogate for the old-school email. That's not really the case; messages easily get lost in the backlog of noisy, run-on chatroom conversations.<br />
<br />
I'm going to keep an eye on Slack usage and try to figure out the best way to keep it from sucking up a large percentage of my productive working hours without losing the benefit of instant communication with the broader team. A good start might be being respectful of people's time: posting casual commentary a little less cavalierly, understanding that Slack can become a drain on my own and other people's time, and approaching it as a tool to be used with a certain level of professional self-restraint.<br />
<br />
<h3>REST API Best Practices 5: Further Reading (2018-02-28)</h3>
Since I started writing on REST API Best Practices there have been some interesting new developments. Going forward we'll take a look at some of them, covering things like documenting APIs, how to define relationships between different resources, and various tools that - while not specifically REST-related - are useful for working with JSON as a data interchange format.<br />
<br />
In the meantime, here's a list of articles that provide more information on a lot of the concepts that were outlined in the first four posts on REST API Best Practices. No doubt a lot more has been written about REST APIs in the last few years, but I think these resources are a pretty good window into some of the original sources that shaped the best practices used for REST API design today.<br />
<br />
If you want to suggest other articles please feel free to comment below (note that comments are moderated and won't appear immediately).<br />
<br />
Tutorials<br />
<a href="http://www.restapitutorial.com/">http://www.restapitutorial.com/</a><br />
<a href="http://obeautifulcode.com/API/Learn-REST-In-18-Slides/">http://obeautifulcode.com/API/Learn-REST-In-18-Slides/</a><br />
<br />
General best practices<br />
<a href="http://www.restapitutorial.com/">http://www.restapitutorial.com/</a><br />
<a href="https://zapier.com/learn/apis/">https://zapier.com/learn/apis/</a><br />
<a href="https://s3.amazonaws.com/tfpearsonecollege/bestpractices/RESTful+Best+Practices.pdf">https://s3.amazonaws.com/tfpearsonecollege/bestpractices/RESTful+Best+Practices.pdf</a><br />
<a href="http://apigee.com/about/api-best-practices">http://apigee.com/about/api-best-practices</a><br />
<a href="http://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api">http://www.vinaysahni.com/best-practices-for-a-pragmatic-restful-api</a><br />
<a href="http://www.slideshare.net/mario_cardinal/best-practices-for-designing-pragmatic-restful-api">http://www.slideshare.net/mario_cardinal/best-practices-for-designing-pragmatic-restful-api</a><br />
<a href="http://apigee.com/about/api-best-practices/restful-api-design-second-edition">http://apigee.com/about/api-best-practices/restful-api-design-second-edition</a><br />
<a href="http://devproconnections.com/web-development/restful-api-development-best-practices">http://devproconnections.com/web-development/restful-api-development-best-practices</a><br />
<a href="http://madhatted.com/2013/3/19/suggested-rest-api-practices">http://madhatted.com/2013/3/19/suggested-rest-api-practices</a><br />
<br />
HATEOAS<br />
<a href="http://restcookbook.com/Basics/hateoas/">http://restcookbook.com/Basics/hateoas/</a><br />
<a href="http://timelessrepo.com/haters-gonna-hateoas">http://timelessrepo.com/haters-gonna-hateoas</a><br />
<br />
HAL<br />
<a href="https://en.wikipedia.org/wiki/Hypertext_Application_Language">https://en.wikipedia.org/wiki/Hypertext_Application_Language</a><br />
<br />
Documentation best practices<br />
<a href="http://bocoup.com/weblog/documenting-your-api/">http://bocoup.com/weblog/documenting-your-api/</a><br />
<br />
Partial updates<br />
<a href="http://stackoverflow.com/questions/232041/how-to-submit-restful-partial-updates">http://stackoverflow.com/questions/232041/how-to-submit-restful-partial-updates</a><br />
<a href="http://restful-api-design.readthedocs.org/en/latest/methods.html">http://restful-api-design.readthedocs.org/en/latest/methods.html</a><br />
<br />
Misc<br />
<a href="http://www.wekeroad.com/2012/02/28/someone-save-us-from-rest/">http://www.wekeroad.com/2012/02/28/someone-save-us-from-rest/</a><br />
<a href="http://docs.couchdb.org/en/latest/api/basics.html#api-basics">http://docs.couchdb.org/en/latest/api/basics.html#api-basics</a><br />
<br />
Auth<br />
<a href="http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/">http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/</a><br />
<a href="http://en.wikipedia.org/wiki/OAuth">http://en.wikipedia.org/wiki/OAuth</a><br />
<a href="http://restcookbook.com/Basics/loggingin/">http://restcookbook.com/Basics/loggingin/</a><br />
<a href="http://broadcast.oreilly.com/2009/12/principles-for-standardized-rest-authentication.html">http://broadcast.oreilly.com/2009/12/principles-for-standardized-rest-authentication.html</a><br />
<a href="http://stackoverflow.com/questions/630538/designing-a-web-api-how-to-authenticate">http://stackoverflow.com/questions/630538/designing-a-web-api-how-to-authenticate</a><br />
<a href="http://en.wikipedia.org/wiki/Session_hijacking#Methods">http://en.wikipedia.org/wiki/Session_hijacking#Methods</a><br />
<a href="http://apiux.com/2013/03/21/authentication-dont-be-clever/">http://apiux.com/2013/03/21/authentication-dont-be-clever/</a><br />
<a href="https://developer.github.com/v3/auth/">https://developer.github.com/v3/auth/</a><br />
<a href="https://github.com/blog/1509-personal-api-tokens">https://github.com/blog/1509-personal-api-tokens</a><br />
<a href="http://stackoverflow.com/questions/7999295/rest-api-authentication">http://stackoverflow.com/questions/7999295/rest-api-authentication</a><br />
<a href="https://www.google.com/search?client=ubuntu&channel=fs&q=api+authentication&ie=utf-8&oe=utf-8">https://www.google.com/search?client=ubuntu&channel=fs&q=api+authentication&ie=utf-8&oe=utf-8</a><br />
<br />
<h3>More Node.JS Module Patterns (2015-12-13)</h3>
The post on <a href="http://51elliot.blogspot.ca/2012/01/simple-intro-to-nodejs-module-scope.html" target="_blank">Node.JS module patterns</a> and the <a href="http://51elliot.blogspot.com/2013/12/nodejs-module-patterns-using-simple.html" target="_blank">slideshow</a> from the talk I did at OttawaJS keep getting a lot of mentions. Those are really simple examples, but in practice most modules have a bit more substance to them.<br />
<br />
There are a few patterns I've seen used a lot in Express.JS apps. One simply exports a bunch of functions that are used as route handlers. Another passes an object to the module, which attaches things to it. The last exports an Express router object that the main app mounts on a base URL to define more specific routes.<br />
<br />
Let's look at these with some simple examples.<br />
<br />
<h3>
Exporting route handler functions</h3>
This is a fairly common pattern where the module simply exports a number of functions. In this case they're route handler functions that an Express app can use to handle the various routes it declares.<br />
<br />
<b><span style="font-family: "courier new" , "courier" , monospace;">users.js</span></b><br />
<br />
<pre><code>exports.getUser = function (req, res, next) {
res.send('respond with a user');
};
exports.updateUser = function (req, res, next) {
res.send('update user and respond');
};
</code></pre>
<br />
<b><span style="font-family: "courier new" , "courier" , monospace;">app.js</span></b><br />
<br />
<pre><code>var users = require('./users');
app.get('/user/:userid', users.getUser);
app.put('/user/:userid', users.updateUser);
</code></pre>
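One nice side effect of exporting plain handler functions like this is that they can be exercised without starting a server at all. The sketch below is hypothetical (the <span style="font-family: "courier new" , "courier" , monospace;">makeRes</span> stub is mine, not part of Express), but it shows how a handler can be unit-tested with fake req/res objects:<br />
<br />
```javascript
// Hypothetical sketch: users.js exports plain handler functions,
// so a test can call them directly with stubbed req/res objects.
var users = {
  getUser: function (req, res) {
    res.send('respond with user ' + req.params.userid);
  }
};

// makeRes is a minimal stand-in for Express's res, just enough for a test.
function makeRes() {
  return {
    body: null,
    send: function (payload) { this.body = payload; }
  };
}

var res = makeRes();
users.getUser({ params: { userid: '42' } }, res);
console.log(res.body); // prints "respond with user 42"
```
<br />
No HTTP listener, no ports - just a function call, which is exactly why this pattern is easy to test.<br />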
<br />
<h3>
Passing in and enhancing an object</h3>
This pattern has been used in some Express example apps and shows how you can pass variables into a module, either to use them in the module or to "enhance" an object by attaching things onto it. Of course, you have to know what you're doing and avoid having two different modules try to define the same routes.<br />
<br />
<span style="font-family: "courier new" , "courier" , monospace;"><b>users.js</b></span><br />
<pre><code>module.exports = function (app) {
app.get('/user/:userid', function (req, res, next) {
res.send('respond with a user');
});
};
</code></pre>
<br />
<b><span style="font-family: "courier new" , "courier" , monospace;">app.js</span></b><br />
<br />
<pre><code>var users = require('./users')(app);
</code></pre>
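To make the mechanics concrete, here's a stripped-down, hypothetical sketch where <span style="font-family: "courier new" , "courier" , monospace;">makeApp</span> stands in for Express. The module function receives the app object and "enhances" it by registering a route; the route matching is deliberately naive (single-segment <span style="font-family: "courier new" , "courier" , monospace;">:param</span> patterns only) and is just meant to illustrate the flow:<br />
<br />
```javascript
// users.js equivalent: receives an app-like object and attaches a route.
function userRoutes(app) {
  app.get('/user/:userid', function (req) {
    return 'respond with user ' + req.params.userid;
  });
}

// A toy stand-in for an Express app: a route table plus a dispatcher.
function makeApp() {
  var routes = [];
  return {
    get: function (path, handler) {
      routes.push({ path: path, handler: handler });
    },
    dispatch: function (url) {
      for (var i = 0; i < routes.length; i++) {
        var keys = [];
        // Turn '/user/:userid' into a regex, remembering the param names.
        var pattern = routes[i].path.replace(/:([^\/]+)/g, function (m, k) {
          keys.push(k);
          return '([^/]+)';
        });
        var match = url.match(new RegExp('^' + pattern + '$'));
        if (match) {
          var params = {};
          for (var j = 0; j < keys.length; j++) params[keys[j]] = match[j + 1];
          return routes[i].handler({ params: params });
        }
      }
      return null;
    }
  };
}

var app = makeApp();
userRoutes(app); // the module "enhances" the app it was given
console.log(app.dispatch('/user/42')); // prints "respond with user 42"
```
<br />
The key point is the direction of the dependency: the app is created first and handed to the module, so the module never needs to know how the app was constructed.<br />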
<br />
<h3>
Express router modules</h3>
The most recent version of <span style="font-family: "courier new" , "courier" , monospace;">express-generator</span> creates a sample app using this pattern. Route modules are created under the <span style="font-family: "courier new" , "courier" , monospace;">routes/</span> directory; each one creates an instance of the Express router, adds its routes to it, and sets <span style="font-family: "courier new" , "courier" , monospace;">module.exports</span> to the router object. The main app.js file then attaches these router objects via <span style="font-family: "courier new" , "courier" , monospace;">app.use('/path', router)</span> as you can see below. It's a nice, clean way to organize route modules in Express.<br />
<br />
<b><span style="font-family: "courier new" , "courier" , monospace;">users.js</span></b><br />
<br />
<pre><code>var express = require('express');
var router = express.Router();
router.get('/:userid', function (req, res, next) {
res.send('respond with a user');
});
module.exports = router;</code></pre>
<br />
<b><span style="font-family: "courier new" , "courier" , monospace;">app.js</span></b><br />
<pre><code>var express = require('express');
var users = require('./users');
var app = express();
app.use('/users', users);
</code></pre>
<br />
<h3>Reacting to React (2015-11-13)</h3>
While I'm on the blog here, I figured I'd take a minute to write down my thoughts on React and React Native.<br />
<br />
Recently I've had occasion to play with React Native a bit. This post is not so much about first impressions, though, but about my perspective and some preconceptions about things like React in general.<br />
<br />
I run a big JavaScript meetup group called OttawaJS. I get to see lots of interesting presentations on the latest and greatest technology, and sometimes give talks myself. Over the past few years I've seen an absolute deluge of new frameworks and libraries for web and mobile development, and, like a lot of people in the tech community, I've suffered from "new framework fatigue". Perhaps that's one reason why, when React came around, I didn't get immediately excited about it. Beyond that, here are some general observations, personal biases, and preconceptions.<br />
<br />
1. When Mark Zuckerberg famously said "HTML5 isn't ready", many people felt Facebook hadn't given it a fair shake. The developers at Sencha proved the point by building an HTML5 clone of the Facebook app that outperformed the native one. So historically, Facebook hasn't been a big proponent of using the web stack for mobile apps. Although I think there are some really good ideas underlying React, having a large company behind something doesn't mean it's the right solution for everybody.<br />
<br />
2. I'm a little wary of frameworks built and promoted by large companies. Enterprises don't usually build open source frameworks without some benefit to themselves, and having more developers on a project tends to add complexity. The frameworks and tools that I usually prefer, and the ones that have generally proven most successful over time, are often written by a single author out of personal interest, and have slowly built up a following.<br />
<br />
3. There seemed to be a lot of marketing behind React, and that can be a bad sign. It had barely been introduced and there were already conferences about it and a flood of videos and articles. Paul Graham said something similar about Java a long time ago: good languages and frameworks don't need to be marketed, and anything with a big marketing engine behind it just smells funny.<br />
<br />
4. There are some cool functional programming concepts in React, but they're mixed in with the classical object model. That points to some confusion about what the designers were trying to build.<br />
<br />
5. Generating HTML programmatically actually does suck. Other languages and frameworks have tried this, and they also sucked. I think Ember's Glimmer engine has taken an approach that minimizes DOM updates for great performance without re-inventing the way HTML is written.<br />
<br />
6. React seems to have done a lot to make people aware of cool things like immutability and components. And that's really great. However I think those concepts have value outside of React and can be applied with smaller, bespoke libraries like Redux and Riot.js.<br />
<br />
7. My gut feeling is that React just isn't the holy grail of component-oriented web frameworks. It brings some cool concepts to the table and has helped shift developers' focus from two-way binding and REST towards things like one-way data flow and GraphQL. But I think web components and the native web stack are heading in the right direction and will ultimately replace a lot of what client-side MVC frameworks currently do.<br />
<br />
So that's why I've told people I'm not betting on React becoming the de facto standard way to build web/mobile apps. At best, I think some of its most useful concepts will be extracted, cleaned up, and incorporated into smaller standalone implementations.<br />
<br />
KCJS: keep calm and JavaScript. :-)<br />
<br />
<br />
<h3>Successful Software Projects (2015-11-13)</h3>
Successful software projects don't happen by accident. A recent post by Jeffrey Ventrella on <a href="https://ventrellathing.wordpress.com/2013/06/18/the-case-for-slow-programming/" rel="nofollow" target="_blank">Slow Programming</a>
resonates with developers who've been through enough projects to know
the pros and cons of the old top-down design approaches versus the
popular new "iterative development" and agile approaches. The author wrote about his
team's fast, iterative development process:<br>
<div style="min-height: 8pt; padding: 0px;">
<br></div>
<div style="padding-left: 30px;">
<em>At
the job, we were encouraged to work in the same codebase, as if it were
a big cauldron of soup, and if we all just kept stirring it
continuously and vigorously, a fully-formed thing of wonder would
emerge. It did not.</em></div>
<div style="min-height: 8pt; padding: 0px;">
<br></div>
The <em>fallacy of fungible engineers</em>
- viewing engineers as resources that can be assigned interchangeably
to various projects - is at the heart of the problem. Unleashing a bunch
of developers on a project to "vigorously stir the cauldron" without a solid high-level design is clearly not a great idea. Iterative development may be
a more realistic approach to software development, but you can't do
away with up-front planning altogether. As Ventrella says, "You can't
wish away the design process."<br>
<div style="min-height: 8pt; padding: 0px;">
<br></div>
So
is iterative development a bad thing? I don't think so. It's
important to note that new paradigms are usually reactionary - they
introduce a different approach to fix the failings of the old one. Out
of necessity, they're usually <em>overly</em> reactionary. The new
approach condemns the old practices in order to get people to change,
but it often fails to recognize that underneath there's probably a core
of ideas still worth considering.<br>
<div style="min-height: 8pt; padding: 0px;">
<br></div>
There
are valuable aspects of both the "slow programming" approach and the
"iterative" approach. It may not be necessary to define all the detailed
requirements up front, but projects can benefit from asking a few
simple questions to lay the groundwork before development starts. These
are things like the optimal technology stack for your problem space,
documentation, and what you want your testing to look like. Too often,
these decisions are made in an ad hoc manner, late in the development
cycle, or simply ignored altogether. I feel that addressing these
questions <em>at the start</em> of a project is important for success. Here they are, roughly in order of importance, as I see them.<br>
<div style="min-height: 8pt; padding: 0px;">
<br></div>
<ul>
<li><strong>Design</strong>:
design is often the worst part of software products created by large
enterprises. Almost no thought goes into it; a product gets built
starting with the data model; layers of code are built up on top of it
until the data is finally vomited onto the user's screen in a great pile of clutter. Good design is usually not part of a large
company's corporate DNA. This makes visual
design an issue of low hanging fruit for smaller competitors, who can
quite easily win over potential clients with the usability and elegance
of their product, even if it lacks the feature set of a more mature
offering. Conversely, great design can't be done by a non-technical designer without a deep technical understanding of the product; it's a
mistake to view design as a separate activity that can take place in
isolation from the technical implementation. Graphic designers are not great at understanding the technical details of a project, and
sometimes force engineers into making poor compromises for the sake of
preserving a look and feel. Engineers are generally bad at user
interface design, unless they also have a background and strong
aptitude in the visual arts. Good engineers who are also artists are
rare. Unless you have one of them, you'll
need a design / engineer team who can work in close collaboration to
define the initial product vision. It is not going to magically emerge
from a set of requirements from Product Level Managers. Ask yourself: Is
there a strong vision for the aesthetic sense of your product? Is it
informed by a strong technical understanding as well as a good aesthetic
sensibility? Can it be demonstrated that the design decisions are
grounded in good usability guidelines and not just the whim of a
non-technical "graphic designer"? Are you leveraging existing
best-in-class frameworks for UI design? Can you point to the examples
that inspire the aesthetic vision of the product? Do you know what you
want your product to <em>look like</em>? Is it <em>beautiful</em>? Does it inspire people to want to use it?</li>
<li><strong>Testability</strong>:
how do you want to be able to test your code? Do you want unit tests?
End to end functional testing? What do you want the testing process to
look like? Will it be the same in the developer environment and the
build environment? What tools will you consider? Do you want continuous
integration? What's considered bad enough to break the build? Do you
value test coverage reporting? How much test coverage is enough?</li>
<li><strong>Documentation</strong>:
it can make or break a project and often it is left as an afterthought,
to be completed by technical writers who have little knowledge of the low-level details. Can you make it
"self documenting"? How? What do you want the documentation to look
like? In what formats will it be available? Will you have a developer's
guide? An install guide? A user guide? How will you keep it up to date?
How will you know if it's out of date?</li>
<li><strong>Surface area</strong>:
how do you define the points at which clients interface with your
product (its surface area)? A lot of this boils down to good API design. What are the best
practices that you wish to implement? Which existing products /
services exemplify what you want to achieve? What are the existing
successful APIs, trends, and industry best practices you want to emulate?
Against what do you measure the usability of your API? Who are the
current best-in-class leaders and how do you build upon their examples?</li>
<li><strong>Technology stack</strong>:
Quite often the default in large enterprises is Java. This is
unfortunate because Java is rarely if ever the best technological
solution to a particular problem - it's just the one that large numbers
of programmers happen to know. Think outside the box - one
developer with a killer tool-set is better than a team of programmers with poor tools. Have you checked out ThoughtWorks Technology
Radar? How do you place bets on which technologies will succeed going
forward? Do you have a strong sense of picking technology winners that
comes from being in the trenches? Or are you a manager trying to make a safe choice? Are your choices based on cargo cult or bandwagon tendencies? Does your framework of choice have a track record of solid
bets, critical mass of users, a core team with a clear focus and
commitment to keeping things small - aka. the "do one thing well"
philosophy? Will you ask for permission or for forgiveness? As a lead
developer, are you willing to tell your managers that the technology and
technical design decisions are ultimately your decisions to make?</li>
<li><strong>Tooling</strong>:
will you standardize on a set of tools or let developers use whatever ad
hoc collection of tools they prefer? Will you utilize linters, and
static code analyzers like PMD, FindBugs, and Flow? Where in the
development process will they be used - in the developer environment,
triggering these tools as each file is edited? Will you trigger build
failures after committing flawed code? Will you depend on a monolithic
IDE like Eclipse or use a lightweight code editor like Sublime?</li>
<li><strong>Code organization / architecture</strong>:
The default is the Big Ball of Mud design pattern. Are you
a devotee of Object Oriented design? Are you enlightened by Functional
Programming? Prefer an Abstraction Oriented architecture? What is your
high level view of how it all fits together with a view towards
delegating responsibilities? How to you define the boundaries between
components? Can you specify, in draft form, an APIs to act as a
'contract' between components that can then be delegated off to
different developers? How will you communicate your architectural vision
and get buy-in from team members?</li>
<li><strong>Prototype</strong>:
have you built a proof of concept? Will you? How? Do you intend to
productize the prototype, or is it meant to serve only as a reference
for what you want the final product to be like? If you plan to evolve
your prototype into a production application, do you have a clear path
for doing so? For example, have you steered clear of design decisions in
the prototype that will be difficult to back out of, like introducing
dependencies on libraries or techniques that you won't be able to
release in a production environment? As a
real-world example: we created a mobile security demo using a relatively new technology
that allowed rapid development, but made sure the API specification was
well defined, knowing that it would not be difficult to re-implement the
API in any server side language we chose.</li>
<li><strong>Delegation</strong>:
how can the project be split up into sensible areas of
responsibility as it grows? If the project has started out with one
full-stack developer, will it be easy to carve off the API, UI, mobile
app, etc. into standalone projects that another developer or team can
take ownership of? A lot of this comes down to having well defined and
documented APIs as a contract between components.</li>
<li><strong>Security</strong>:
It's not enough to trust that your product is going to be secure because
it uses Framework X. Upon inspection, our initial subscriber-facing
web interface turned out to be susceptible to session hijacking,
cross-site scripting, and SQL injection, and security issues continued
to be present despite being overseen by senior developers. Security
audit tools revealed even more weaknesses. What tools will you use to
audit the security of your product (Skipfish, Nessus, OpenVAS, etc.)?
Will you carry out manual penetration testing on your product? Will
security audits be performed automatically, on a regular basis? Will
they be integrated into your build environment or QA environment?</li>
</ul>
<div style="min-height: 8pt; padding: 0px;">
<br></div>
The questions you
need to ask may vary, but the important thing is to have a plan,
communicate the plan well, empower people to own their part of it, make
the design objectives clear (including all of the points above), and
make sure everyone knows how their piece fits into the overall product
vision.<br />
<br />
<h3>Fluent 2014 Talk Summaries 3 (2015-01-22)</h3>
<h3>
"Speed, Performance and Human Perception"</h3>
<br />
In this performance-related talk, Ilya Grigorik explains that performance is not only a function of speed, but of meeting user expectations in a way that allows them to complete tasks effectively, with insightful examples.<br />
<br />
[<a href="http://youtu.be/7ubJzEi3HuA?list=PL055Epbe6d5bab7rZ3i83OtMmD-d9uq2K" target="_blank">video</a>]<br />
<br />
My rating: 4/5. Provides useful insight on usability.<br />
<br />
<h3>
"Delivering the Goods"</h3>
<br />
Paul Irish discusses optimization in this keynote talk, describing the "critical path" and how requests impact page load times. Chrome developer tools are used to explain page load sequences and timing. Recommendations include: eliminate render-blocking JS; minimize render-blocking CSS; serve content in the original HTML response and use gzip compression. Google Page Speed Test is a tool to automatically recommend such optimizations.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="315" src="//www.youtube.com/embed/R8W_6xWphtw" width="560"></iframe>
[<a href="http://youtu.be/R8W_6xWphtw?list=PL055Epbe6d5bab7rZ3i83OtMmD-d9uq2K" target="_blank">video</a>]<br />
<br />
My rating: 5/5. Beneficial recommendations for all developers of web-based software.<br />
<br />
<h3>Fluent 2014 Talk Summaries, continued (2015-01-20)</h3>
<h3>
"Reading, Writing, Arithmetic... and JavaScript?"</h3>
<br />
Pamela Fox, the person behind the JavaScript-based programming curriculum at Khan Academy, discusses how age affects a person's ability to learn programming, noting that most 13-year-olds are capable of learning basic programming skills that will help them explore other fields like art, history and language. Visit <a href="https://www.khanacademy.org/computing/computer-programming" target="_blank">Khan Academy</a> for more information on their Computer Programming curriculum.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://www.khanacademy.org/computing/computer-programming" target="_blank"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0TTiD9KUFLHBgvWiaOiIGt68WeuOo598lufEAjwa26J-3pQcnaWZTdUL04C8-QDbwgiyhxCttJIZgucmYQ2Ck3QR45e8mD2_YeFjAAeAcIPmlWkCDvNYW3hygeyclYWT22Rt2fEJPEgY/s1600/kacp.jpeg" /></a></div>
<h3>
</h3>
<br />
My rating: 3/5. Introduces a great educational resource for young people. Plus, I've been a fan of Pamela's work ever since I read her articles on JavaScript widgets at Coursera.<br />
<br />
[<a href="https://www.youtube.com/watch?v=aiFOurKwy7M" target="_blank">video</a>]<br />
<br />
<h3>
</h3>
<h3>
"Virtual Machines, JavaScript and Assembler"</h3>
<br />
Popular podcaster Scott Hanselman delivers a humorous talk describing how the basic features of an operating system exist in both the cloud and the browser.<br />
<br />
My rating: 3/5. Entertaining for programmers, though light on technical take-aways.<br />
<br />
[<a href="https://www.youtube.com/watch?v=UzyoT4DziQ4" target="_blank">video</a>]<br />
<br />
<h3>
"The Humble Border-Radius"</h3>
<br />
In this talk Lea Verou demonstrates how border radii can be used to create a variety of shapes and animations, as well as upcoming CSS specifications for different corner shapes. While interesting, I think what web designers really want is complete flexibility to design arbitrary shapes, and border-radius seems like a very awkward mechanism for accomplishing that.<br />
<br />
My rating: 3/5. Interesting CSS hacks.<br />
<br />
[<a href="https://www.youtube.com/watch?v=JSaMl2OKjfQ" target="_blank">video</a>]<br />
<br />Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-167368538494963662015-01-19T18:46:00.001-08:002015-01-20T20:26:49.737-08:00Fluent Talk Summary: Brendan Eich, "JavaScript: the High and Low Roads"In this lighthearted talk, Brendan Eich (inventor of JavaScript) discusses high and low-level improvements in upcoming versions of JavaScript.
Acknowledging that Web development is hard ("like having a chain-saw in place of a hand"), Eich says the upcoming version of JavaScript, ES6 "Harmony", will address many difficulties facing web developers, with improvements like a built-in module system, observable objects, and many other features that will make "transpilers" and other "syntactic sugar" libraries unnecessary.<br />
<br />
The future ES7 version of JavaScript aims to bring many low-level improvements for high-performance and scientific computing, including new value objects, operator overloading, and SIMD intrinsics.
New value objects proposed in ES7 include: <span style="font-family: Courier New, Courier, monospace;">int64</span>, <span style="font-family: Courier New, Courier, monospace;">uint64</span>, <span style="font-family: Courier New, Courier, monospace;">int32x4</span> and <span style="font-family: Courier New, Courier, monospace;">int32x8</span> (SIMD), <span style="font-family: Courier New, Courier, monospace;">float32</span> (useful for GPUs), <span style="font-family: Courier New, Courier, monospace;">float32x4</span> and <span style="font-family: Courier New, Courier, monospace;">float32x8</span> (SIMD), <span style="font-family: Courier New, Courier, monospace;">bignum</span>, <span style="font-family: Courier New, Courier, monospace;">decimal</span>, <span style="font-family: Courier New, Courier, monospace;">rational</span>, and <span style="font-family: Courier New, Courier, monospace;">complex</span>. ES7 will also introduce operator overloading and new literals like <span style="font-family: Courier New, Courier, monospace;">0L</span> (int64), <span style="font-family: Courier New, Courier, monospace;">0UL</span> (uint64), etc. Support for SIMD (Single Instruction, Multiple Data) will lead to native performance for applications like game programming and signal processing. To demonstrate further, Eich reveals the first-ever public demo of the Unreal Engine 4 from Epic Games, showing stunning 3D graphics running at a full 60 frames per second, with no plugins, in a Firefox build with special SIMD capabilities.<br />
<br />
Guided by the <a href="https://extensiblewebmanifesto.org/" target="_blank">Extensible Web Manifesto</a>, the high road of developer-friendly features and the low road of safe, low-level language improvements will converge in a virtual machine that offers native performance while being very developer friendly.<br />
<br />
In a nutshell - "Always bet on JS!"<br />
<br />
My rating: 4/5. Informative, insightful and entertaining for programmers.<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="315" src="//www.youtube.com/embed/aZqhRICne_M?list=PL055Epbe6d5bab7rZ3i83OtMmD-d9uq2K" width="560"></iframe>Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-25592345562687345732015-01-18T19:50:00.001-08:002015-09-30T19:11:21.146-07:00Dynamic Types FTWThere's a belief among some programmers, particularly the OO classical inheritance folks, that static typing is a bulwark of security against all kinds of disasters that may happen in your code. They feel that the compiler will ensure their programs work correctly, by generating errors and warnings and refusing to compile their program until all the mistakes are fixed.<br />
<br />
Some developers mistakenly speak in terms of "strong typing" and "weak typing". Put that way, who wouldn't prefer "strong" over "weak" typing? In reality, the terms "strong typing" and "weak typing" have no formal definition in computer science. They're misnomers that people sometimes use to talk about strict (explicit) type definition with static type checking, versus dynamic (runtime) type checking with implicit type conversion.<br />
<br />
It's unfortunate that this line of reasoning causes some people to avoid languages that offer dynamic types. Dynamic types can be very useful, and static typing actually offers very little in the way of ensuring program correctness. I finally sat down and watched Eric Elliott's presentation "<a href="https://www.youtube.com/watch?v=_kXiH1Yiemw" target="_blank">Static Types are Overrated: The Dynamic Duo - Loose Types and Object Extension</a>" and was treated to a bang-on description of why static types aren't all they're cracked up to be. Eric does a good job of explaining and clarifying things about JavaScript and programming in general that I intuitively believed but had a difficult time putting into words or backing up with solid examples.<br />
<br />
I borrowed a bunch of points from Eric's talk when I spoke at a recent Ottawa JavaScript meetup. I was presenting on Flow, a static type checker for JavaScript, but I wanted people to see it as something that could be added to their regular code linting process rather than something to enforce a statically typed programming style. Here are some of them.<br />
<ul>
<li>It's a myth that dynamic, loosely typed languages like JavaScript aren't suitable for building large, complex applications. There are dozens of counter-examples (Facebook, 37signals, Dow Jones, Adobe, Flickr, LinkedIn, and Walmart, to name a few).</li>
<li>Type correctness does not guarantee program correctness. This is one of the biggest cargo-cult myths out there: thinking that static, strict type checking will save you. It won't.</li>
<li>When languages lack dynamic types, people fake it with ugly hacks - void pointers, variadic functions, abusing array accessors, type-casting of all sorts, generics and ugly template classes, to name a few. I don't know any C or Java programmers who haven't used and abused at least a few of these techniques.</li>
<li>"Any sufficiently complicated C or Fortran [Java, etc.] program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp" - Greenspun's Tenth Rule. (There are no rules 1 through 9, by the way. Greenspun's Rules of Programming start and end with number 10.)</li>
<li>Older statically typed languages are now introducing dynamic types because they really are useful. Objective-C added them a long time ago; more recently C++11 has added dynamic typing facilities, Java has "generics", and libraries like cppscript and Boost provided ways to do dynamic typing in earlier versions of C++.</li>
<li>Functional programming benefits from functions that can operate on any type which meets their requirements (for example, any argument that has a valueOf method), otherwise known as duck typing. This permits things like map, filter and forEach to be used generically.</li>
<li>Correctness can only really be assured through proper unit and integration testing. </li>
<li>Code that is well organized in small, simple modules, linted, unit tested, peer reviewed, and integration tested, is very unlikely to contain type errors. </li>
</ul>
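The duck-typing point above can be illustrated with a tiny sketch (the function and object names here are hypothetical, made up for illustration): a generic function never checks what "class" its arguments are, only whether they implement <span style="font-family: Courier New, Courier, monospace;">valueOf</span>.

```javascript
// A generic function: it accepts anything that implements valueOf.
// It never checks types; it only cares about behaviour (duck typing).
function sum(items) {
  return items.reduce(function (total, item) {
    return total + item.valueOf();
  }, 0);
}

// Plain numbers already implement valueOf...
var n = sum([1, 2, 3]); // 6

// ...and so does any object we give a valueOf method (hypothetical example).
function Money(cents) { this.cents = cents; }
Money.prototype.valueOf = function () { return this.cents; };

var total = sum([new Money(250), new Money(175), 75]); // 500
```

The same trick is what lets map, filter and forEach work generically across types.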
<br />
The last thing I would add is that while static types are overrated, static code analysis tools like JSLint, Tern.js and Flow are mostly underrated and underutilized. If you use something like Flow or Tern.js to provide hints and insights into potential type mismatches, the chances of type errors become essentially zero. Ideally these tools should be as up-front as possible, i.e. integrated directly into your code editor. I wrote a <a href="https://packagecontrol.io/packages/JSLint" target="_blank">JSLint plugin</a> for <a href="http://www.sublimetext.com/" target="_blank">SublimeText</a> and recently wrote a <a href="https://packagecontrol.io/packages/Flow" target="_blank">Flow plugin</a> as well. They provide immediate static code analysis warnings as I edit my code, so I can fix problems before I commit anything to the source code repository.<br />
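One nice property of Flow is that it also supports comment-based annotations, so a file stays plain, runnable JavaScript while still being checkable. A minimal sketch (the function here is hypothetical):

```javascript
// A minimal sketch of opt-in type checking using Flow's comment syntax.
// The annotations live in comments, so the file runs as ordinary JavaScript,
// while `flow check` would flag a call like add('1', 2) as a type mismatch.
function add(a /*: number */, b /*: number */) /*: number */ {
  return a + b;
}

var result = add(1, 2); // 3 -- and Flow verifies the argument types statically
```

This is what makes it easy to treat Flow as part of the linting process rather than a change in programming style.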
<br />
<div style="text-align: center;">
<iframe allowfullscreen="" frameborder="0" height="270" src="https://www.youtube.com/embed/_kXiH1Yiemw" width="480"></iframe>
</div>
Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com2tag:blogger.com,1999:blog-3302102753324425376.post-62706251126604994552014-11-12T07:52:00.002-08:002014-11-17T10:30:51.385-08:00Three Simple Rules for Escaping Callback HellA lot of newcomers to Node.JS complain about "callback hell" and the "pyramid of doom" when they're getting started with the callback-driven continuation passing style. It's confusing, and a lot of people reach for an async / flow-control module right away. Many people have settled on using Promises, a solution that brings some unfortunate problems along with it (performance, error-hiding anti-patterns, and illusory behavior, for example).<br />
<br />
I prefer using some simple best practices for working with callbacks to keep my code clean and organized. These techniques don't require adding any extra modules to your code base, won't slow your program down, don't introduce error-hiding anti-patterns, and don't convey a false impression of synchronous execution. Best of all, they result in code that is actually more readable and concise, and once you see how simple they are, you might want to use them, too.<br />
<br />
Here they are:<br />
<ol>
<li>use named functions for callbacks</li>
<li>nest functions when you need to capture (enclose) variable scope</li>
<li>use return when invoking the callback</li>
</ol>
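Before walking through the examples step by step, here's a minimal sketch of all three rules applied together (the functions here are hypothetical stand-ins for real async calls, not taken from the gists in this post):

```javascript
// A fake async operation standing in for real I/O (hypothetical example).
function getName(callback) {
  setImmediate(callback, null, 'World');
}

function greet(callback) {
  // Rule 1: pass a *named* function as the callback, not an anonymous one.
  getName(onName);

  // Rule 2: onName is nested only because it needs to enclose greet's
  // "callback" argument from the caller's scope.
  function onName(err, name) {
    if (err) return callback(err);               // Rule 3: return on error
    return callback(null, 'Hello, ' + name + '!'); // Rule 3: return when done
  }
}
```

No pyramid, no extra flow-control module, and each rule is visible in a couple of lines.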
<h3>
The Pyramid of Doom</h3>
Here's a contrived example that uses typical node.js callbacks with (err, result) arguments. It's a mess of nested functions: the so-called Pyramid of Doom. It keeps indenting, layer upon smothering layer, until it unwinds in a great cascading spasm of parentheses, braces and semicolons.<br />
<br />
<script src="https://gist.github.com/darrenderidder/a4ccbd463146a254b22b.js?file=callbacks1.js"></script>
<br />
<h3>
Named Callbacks</h3>
The Pyramid of Doom is often shown as a reason to use Promises, but most async libraries -- including and especially Promises -- don't really solve this nesting problem. We don't end up with deeply nested code like this because something is wrong with JavaScript. We get it because people write bad, messy code. Named callbacks solve this problem, very simply. Andrew Kelley wrote about this on his blog a while ago ("<a href="http://andrewkelley.me/post/js-callback-organization.html" target="_blank">JavaScript Callbacks are Pretty Okay</a>"). It's a great post with some simple ways of taming "callback hell" that get skipped over by a lot of node newcomers.<br />
<br />
Here's the above example re-written using named callback functions. Instead of a Russian doll of anonymous functions, every function that takes a callback is passed the <i>name</i> of the callback function to use. The callback function is defined immediately afterwards, greatly improving readability.<br />
<br />
<script src="https://gist.github.com/darrenderidder/a4ccbd463146a254b22b.js?file=callbacks2.js"></script>
<br />
<h3>
Nest Only for Scope</h3>
We can do even better. Notice that two functions, <span style="background-color: #eeeeee; font-family: Courier New, Courier, monospace;">sendGreeting</span> and <span style="background-color: #eeeeee; font-family: Courier New, Courier, monospace;">showResult</span>, are still nested inside the <span style="background-color: #eeeeee; font-family: Courier New, Courier, monospace;">getGreeting</span> function. Nested "inner" functions create a closure that encloses the callback function's own local variable scope, plus the variable scope of the function it's nested inside of. These nested callbacks can access variables from higher up the call stack. In our example, both <span style="background-color: #eeeeee; font-family: Courier New, Courier, monospace;">sendGreeting</span> and <span style="background-color: #eeeeee; font-family: Courier New, Courier, monospace;">showResult</span> use variables that were created earlier in the <span style="background-color: #eeeeee; font-family: Courier New, Courier, monospace;">getGreeting</span> function. They can access these variables from <span style="background-color: #eeeeee; font-family: Courier New, Courier, monospace;">getGreeting</span> because they're nested inside <span style="background-color: #eeeeee; font-family: Courier New, Courier, monospace;">getGreeting</span> and thus enclose its variable scope.<br />
<br />
A lot of times this is totally unnecessary. You only need to nest functions if you need to refer to variables in the scope of the caller from within the callback function. Otherwise, simply put named functions on the same level as the caller. In our example, variables can be shared by moving them to the top-level scope of the <span style="background-color: #eeeeee; font-family: Courier New, Courier, monospace;">greet</span> function. Then, we can put all our named functions on the same level. No more nesting and indentation!<br />
<br />
<script src="https://gist.github.com/darrenderidder/a4ccbd463146a254b22b.js?file=callbacks3.js"></script>
<br />
<h3>
Return when invoking a Callback</h3>
The last point to improve readability is more a stylistic preference, but if you make a habit of always returning from an error-handling clause, you can further minimize your code. In direct-style programming where function calls are meant to return a value, common wisdom says that returning from an if clause like this is bad practice that can lead to errors. With continuation-passing style, however, explicitly returning when you invoke the callback ensures that you don't accidentally execute additional code in the calling function after the callback has been invoked. For that reason, many node developers consider it best practice. In trivial functions, it can improve readability by eliminating the else clause, and it is used by a number of popular JavaScript modules. I find a pragmatic approach is to return from error handling clauses or other conditional if/else clauses, but sometimes leave off the explicit <span style="background-color: #eeeeee;"><span style="font-family: 'Courier New', Courier, monospace;">return</span></span> on the last line in the function, in the interest of less code and better readability. Here's the updated example:<br />
<br />
<script src="https://gist.github.com/darrenderidder/a4ccbd463146a254b22b.js?file=callbacks4.js"></script>
Compare this example with the Pyramid of Doom at the beginning of the post. I think you'll agree that these simple rules result in cleaner, more readable code and provide a great escape from the Callback Hell we started out with.<br />
<div>
<br /></div>
<div>
Good luck and have fun!</div>
Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com2tag:blogger.com,1999:blog-3302102753324425376.post-41525006754065009422014-10-13T16:14:00.000-07:002014-11-11T14:27:24.696-08:00How Wolves Change RiversA beautifully filmed short video from Yellowstone National Park that reminds us of the importance of wildlife for the health of the whole planet:<br />
<br />
<iframe allowfullscreen="" frameborder="0" height="281" mozallowfullscreen="" src="//player.vimeo.com/video/86466357?title=0&byline=0&portrait=0" webkitallowfullscreen="" width="500"></iframe> <br />
<a href="http://vimeo.com/86466357">How Wolves Change Rivers</a> from <a href="http://vimeo.com/thesustainableman">Sustainable Man</a> on <a href="https://vimeo.com/">Vimeo</a>.Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-56738088544456555812014-09-17T19:02:00.003-07:002014-11-11T20:35:38.922-08:00doxli - a help utility for node modules on command lineQuite often I fire up the node REPL and pull in some modules I've written to use on the command line. Unfortunately I often forget the exact way to call the various functions in those modules (there are a lot) and end up doing something like <span style="font-family: "Courier New",Courier,monospace;">foo.dosomething.toString()</span> to see the source code and recall the function signature.<br />
<br />
In the interest of making code as "self-documenting" as possible, I wrote a small utility that uses <a href="https://github.com/visionmedia/dox" target="_blank">dox</a> to provide help for modules on the command line. It adds a help() function to a module's exported methods so you can get the dox / jsdoc comments for the function on the command line.<br />
<br />
So now <span style="font-family: 'Courier New', Courier, monospace;">foo.dosomething.help()</span> will return the description, parameters, examples and so on for the method, based on the documentation in the comments.<br />
<br />
It's still a bit of a work in progress, but it works nicely - provided you actually document your modules with jsdoc-style comments.<br />
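The idea can be roughly sketched like this (a simplified illustration only; doxli itself uses dox to return parsed jsdoc comments rather than raw source):

```javascript
// Simplified sketch of the idea behind doxli (not its actual implementation):
// attach a help() method to each exported function. Here help() just returns
// the function's signature line via toString(); doxli returns parsed comments.
function addHelp(mod) {
  Object.keys(mod).forEach(function (name) {
    var fn = mod[name];
    if (typeof fn === 'function') {
      fn.help = function () {
        return fn.toString().split('\n')[0]; // first line: the signature
      };
    }
  });
  return mod;
}

// Usage in a REPL session (hypothetical module):
var foo = addHelp({
  dosomething: function (input, callback) { callback(null, input); }
});
// foo.dosomething.help() returns the function's signature line
```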
<br />
All the info is here: <a href="https://www.npmjs.org/package/doxli" target="_blank">https://www.npmjs.org/package/doxli</a>Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-68999832300237963412014-09-07T08:32:00.000-07:002014-09-17T19:47:14.951-07:00REST API Best Practices 4: Collections, Resources and IdentifiersOther articles in this series:<br />
<ol>
<li><a href="http://51elliot.blogspot.com/2014/03/rest-api-best-practices-rest-cheat-sheet.html">REST API Best Practices: A REST Cheat Sheet</a></li>
<li><a href="http://51elliot.blogspot.com/2014/04/rest-api-best-practices-http-and-crud.html">REST API Best Practices: HTTP and CRUD</a></li>
<li><a href="http://51elliot.blogspot.com/2014/05/rest-api-best-practices-3-partial.html">REST API Best Practices: Partial Updates - PATCH vs. PUT</a></li>
</ol>
RESTful APIs center around resources that are grouped into collections. A classic example is browsing through the directory listings and files on a website like <a href="http://vault.centos.org/">http://vault.centos.org/</a>. When you browse the directory listing, you can click through a series of folders to download files. The folders are collections of CentOS resource files.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://vault.centos.org/" target="_blank"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjP8RUsMCRG8SrWHWy0Y-u6MeBJzcnsoLEsQSRZkurmSQtvlN70hnQD29udsGmm_mLJxoXuT72sBXdymlRuYqSSRR4YJy39r366FS6hBiXJpnsWmEctmg3h3Kn4eLseCh0Rmu8gPfYk1o/s1600/vault.png" height="238" width="320" /></a></div>
<br />
<br />
In REST, collections and resources are accessed via HTTP URIs in a similar way:<br />
<br />
<span style="font-family: 'Courier New', Courier, monospace;">members/ -- a collection of members</span><br />
<span style="font-family: 'Courier New', Courier, monospace;">members/1 -- a resource representing member #1</span><br />
<span style="font-family: 'Courier New', Courier, monospace;">members/2 -- a resource representing member #2</span><br />
<br />
It may help to think of a REST collection as a directory folder containing files, although it's highly unlikely that the member data is stored as literal JSON files on the server. The member data is probably coming from a database, but from the perspective of a REST API, it looks similar to a directory called "members" that contains a bunch of files for download.<br />
<br />
<h4>
Naming collections </h4>
<br />
In case it's not obvious already, collection names should be nouns. Use the plural form for naming collections. There's been some debate over whether collection names should be plural (members/1) or singular (member/1); the plural form seems to be the most widely used.<br />
<br />
<h4>
Getting a collection </h4>
<br />
Getting a collection, like "members", may return<br />
<ol>
<li>the entire list of resources
as a list of links, </li>
<li>partial representations of each resource, or </li>
<li>full representations of all the resources in the collection. </li>
</ol>
Our classic example of browsing online directories and files uses approach #1, returning a list of links to the files. The list is formatted in HTML, so you can click on the hyperlink to access a particular file.<br />
<br />
Approach #2, returning a partial representation (i.e. first name, last name) of all resources in a collection, is a more pragmatic way of returning enough information for the end user to make a selection and request further details, especially if the collection can contain a lot of resources. Actually, the directory listings on a website like <a href="http://vault.centos.org/">http://vault.centos.org/</a> display more than just the hyperlink. They include additional meta-data like the last-modified timestamp and file size, as well. This is helpful for the end-user who's looking for an up-to-date file and wants to know how long it will take to download. It's a good example of returning just enough information about the resources for the end-user to be able to make a selection.<br />
<br />
With approach #3, if a collection is small, you may want to return the full representation of all the resources in the collection as a big array. For large collections, however, it isn't practical. GitHub is the only RESTful API example I've seen that actually returns a full representation of all resources when you fetch the collection. I wouldn't consider #3 to be a "best practice" or recommend it for most use cases, but if you know the collection and resources will be small, it might be more effective to fetch the whole collection all at once like this.<br />
<br />
The best practice for fetching a collection of resources, in my opinion, is #2: return a partial representation of the resources in a collection with just enough information to facilitate the selection process, and be sure to include the URL (href) of each resource where it can be downloaded from.<br />
<br />
Consider bending the rules with approach #3, returning all the resources in one fell swoop, only when a collection is guaranteed to be small and you need to reduce the performance impact of making multiple queries.<br />
<br />
Here's a practical example of fetching the collection of members using approach #2.<br />
<br />
Request<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">GET /members</span><br />
<span style="font-family: Courier New, Courier, monospace;">Host: localhost:8080</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: inherit;">Response</span><br />
<span style="font-family: inherit;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">HTTP/1.1 200 OK</span><br />
<span style="font-family: Courier New, Courier, monospace;">Content-Type: application/json; charset=utf-8</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">[</span><br />
<span style="font-family: Courier New, Courier, monospace;"> {</span><br />
<span style="font-family: Courier New, Courier, monospace;"> "id</span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">: 1,</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> "href": "/members/1",</span><br />
<span style="font-family: Courier New, Courier, monospace;"> </span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">firstname</span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">: "john",</span><br />
<span style="font-family: Courier New, Courier, monospace;"> </span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">lastname</span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">: "doe"</span><br />
<span style="font-family: Courier New, Courier, monospace;"> },</span><br />
<span style="font-family: Courier New, Courier, monospace;"> {</span><br />
<span style="font-family: Courier New, Courier, monospace;"> </span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">id</span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">: 2,</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> "href": "/members/2",</span><br />
<span style="font-family: Courier New, Courier, monospace;"> </span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">firstname</span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">: "jane",</span><br />
<span style="font-family: Courier New, Courier, monospace;"> </span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">lastname</span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">: "doe"</span><br />
<span style="font-family: Courier New, Courier, monospace;"> }</span><br />
<span style="font-family: Courier New, Courier, monospace;">]</span><br />
<span style="font-weight: normal;"><br /></span>
<span style="font-weight: normal;">In this example, some minimal information is returned about each of the members: first and last name, id, and the "href" URL where the full representation of the member resource can be downloaded.</span><br />
<span style="font-weight: normal;"><br /></span>
<br />
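As an illustration, the partial representation above could be derived from full member records with a small helper on the server side (the function name and data here are hypothetical):

```javascript
// Build the partial (list) representation of a collection: just enough
// fields for selection, plus the href where the full resource lives.
// (Names and data are hypothetical, for illustration only.)
function toPartial(members) {
  return members.map(function (m) {
    return {
      id: m.id,
      href: '/members/' + m.id,
      firstname: m.firstname,
      lastname: m.lastname
    };
  });
}

var fullRecords = [
  { id: 1, firstname: 'john', lastname: 'doe', active: true, foo: 'bar' },
  { id: 2, firstname: 'jane', lastname: 'doe', active: false, foo: 'baz' }
];

var listing = toPartial(fullRecords);
// listing[0] -> { id: 1, href: '/members/1', firstname: 'john', lastname: 'doe' }
```

The extra fields on the full records (active, last login, etc.) are deliberately dropped; they belong in the full representation at each resource's href.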
<h4>
Getting a resource</h4>
<br />
Getting a specific resource should return the full representation of that resource, from the URL that contains the collection name and the ID of the specific resource you want.<br />
<br />
<h4>
Resource IDs</h4>
<br />
RESTful
resources have one or more identifiers: a numerical ID, a title, and so
on. Common practice is for every resource to have a numeric ID
that is used to reference the resource, although there are some notable
exceptions to the rule.<br />
<br />
Resources themselves should contain their numerical ID; the current best practice is for this to exist within the resource simply as an attribute labelled "id". Every resource should contain an "id"; avoid using more complicated names for resource identifiers like "memberID" or "accountNumber" and just stick with "id". If you need additional identifiers on a resource, go ahead and add them, but always have an "id" that acts as the primary way to retrieve the resource. So, if a member has "id": 1, it should be fairly obvious that you can fetch that member's details at the URL "members/1".<br />
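That convention makes resource URLs trivially derivable from the "id" attribute; a one-line sketch (the helper name is hypothetical):

```javascript
// Derive a resource's URL from its collection name and "id" attribute.
// (A hypothetical helper illustrating the id -> URL convention.)
function resourceUrl(collection, resource) {
  return '/' + collection + '/' + resource.id;
}

var member = { id: 1, firstname: 'john', lastname: 'doe' };
var url = resourceUrl('members', member); // "/members/1"
```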
<br />
An example of fetching a member resource would be:<br />
<br />
Request<br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">GET /members/1</span><br />
<span style="font-family: Courier New, Courier, monospace;">Host: localhost:8080</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: inherit;">Response</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">HTTP/1.1 200 OK</span><br />
<span style="font-family: Courier New, Courier, monospace;">Content-Type: application/json; charset=utf-8</span><br />
<span style="font-family: Courier New, Courier, monospace;"><br /></span>
<span style="font-family: Courier New, Courier, monospace;">{</span><br />
<span style="font-family: Courier New, Courier, monospace;"> "id</span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">: 1,</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> "href": "/members/1",</span><br />
<span style="font-family: Courier New, Courier, monospace;"> </span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">firstname</span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">: "john",</span><br />
<span style="font-family: Courier New, Courier, monospace;"> </span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">lastname</span><span style="font-family: 'Courier New', Courier, monospace;">"</span><span style="font-family: 'Courier New', Courier, monospace;">: "doe",</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> "active": true,</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> "lastLoggedIn": "Tue Sep 16 2014 08:37:42 GMT-400 (EDT)",</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> "foo": "bar",</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> "fizz": "buzz",</span><br />
<span style="font-family: 'Courier New', Courier, monospace;"> "qux": "doo"</span><br />
<span style="font-family: Courier New, Courier, monospace;">}</span><br />
<br />
<h4>
Beyond simple collections</h4>
<br />
Most of the examples you see online are fairly simple, but practical data models are often much more complex. Resources frequently contain sub-collections and relationships with other resources. API design in this area seems to be done in a mostly ad-hoc manner, but there are some practical considerations and trade-offs when designing APIs for more complex data models, which I'll cover in the next post.<br />
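As a small taste of what sub-collections involve: a common convention is to expose a sub-collection at a URL nested under its parent resource, and to link to it from the parent's representation rather than embedding everything inline. Here's a minimal sketch - the `posts` sub-collection and all of the sample data are hypothetical, invented purely for illustration:

```javascript
// Hypothetical sketch: a "posts" sub-collection nested under each member.
// Instead of embedding every post in the member representation, the member
// links to its sub-collection via an href, and clients follow the link.

const members = {
  1: { id: 1, href: "/members/1", firstname: "john", lastname: "doe" }
};

const postsByMember = {
  1: [
    { id: 10, href: "/members/1/posts/10", title: "Hello" },
    { id: 11, href: "/members/1/posts/11", title: "Another post" }
  ]
};

// GET /members/1 would return this: the sub-collection appears as a link.
function memberRepresentation(id) {
  const member = members[id];
  return Object.assign({}, member, {
    posts: { href: member.href + "/posts" }
  });
}

// GET /members/1/posts would return the sub-collection itself.
function memberPosts(id) {
  return postsByMember[id] || [];
}

console.log(memberRepresentation(1).posts.href); // /members/1/posts
console.log(memberPosts(1).length);              // 2
```

The trade-off, of course, is link-following round trips versus embedding, which is exactly the sort of design decision the next post deals with.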
<br />Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-90246906908965594332014-08-21T20:00:00.000-07:002014-08-27T12:01:41.649-07:00Defensive Shift - Turning the Tables on SurveillanceLike many people lately, I've been pondering the implications of pervasive surveillance, "big data" analysis, state-sponsored security exploits, and the role of technology in government. For one thing, my work involves a lot of the same technology: deep packet inspection, data analysis, machine learning and even writing experimental malware. However, instead of building tools that enable pervasive government surveillance, I've built a product that tells mobile smartphone users if their device, or a laptop connected to it, has been infected with malware, been commandeered into a botnet, or come under attack from a malicious website, and so on. I'm happy to be working on applying some of this technology in a way that actually benefits regular people. It feels much more on the "good side" of technology than on the bad side we've been hearing so much about lately. <br />
<br />
Surveillance of course has been in the news a lot lately, so we're all familiar with the massive betrayal of democratic principles by governments, under the guise of hunting the bogeyman. It's good that people are having conversations about reforming it, but don't expect the Titanic to turn around suddenly. There's far too much money and too many careers on the line to just shut down the leviathan of pervasive surveillance overnight. It will take time, and a new generation of more secure networking technologies.<br />
<br />
Big data has also been in the news in some interesting ways: big data analysis has been changing the way baseball is played! CBC's David Common presents the story <span style="font-size: x-small;">[1]</span>:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.cbc.ca/news/world/how-the-defensive-shift-and-big-data-are-changing-baseball-1.2739619" target="_blank"><img alt="http://www.cbc.ca/news/world/how-the-defensive-shift-and-big-data-are-changing-baseball-1.2739619" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKlrHLgybwtDY7JA5K7QIemlKyjlj-cMAWzftzwUOQBDOSnBTo1r4hOrddqm_r2uSb4pT6jPQjBlJz8Ydkz1R48DDrYVv2jc3et5BSaDpcKoAvHUuN0NcSBCt_Bvi60yYAZJUTwGsarKk/s1600/cbc1.png" height="238" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
Not everyone is happy with the "defensive shift" - the practice of repositioning fielders based on batting stats that tell coaches how likely a batter is to hit left or right, short or long. Longtime fans feel it takes away from the human element of the game and turns it into more of a science experiment.<br />
<br />
I tend to agree. And to be honest, until now, deep traffic inspection, big data analysis, surveillance, and certainly state-sponsored hacking have quite justifiably earned a reputation as, well, repugnant to any freedom-loving, democracy-living, brain-having person. Nevertheless, as powerful as big data analytics, machine learning, and network traffic analysis are, and as much as they have been woefully abused by our own governments, I don't think we've yet begun to see the potential for good that these technologies could have, particularly if they are applied in the reverse of the way they're being used now.<br />
<br />
Right now we're in a position where a few privileged, state-sponsored bad actors are abusing their position of trust and authority to turn the lens of surveillance and data analysis upon ordinary people, foreign <a href="http://www.cbc.ca/news/politics/why-would-canada-spy-on-brazil-mining-and-energy-officials-1.1931465" target="_blank">business competitors</a>[2], <a href="http://www.cnn.com/2013/09/27/politics/nsa-snooping/" target="_blank">jilted lovers</a> [3], etc. The sea change that will, I think, eventually come is when the lens of technology slowly turns with relentless inevitability onto the government itself, and we have the people observing and monitoring and analyzing the effectiveness of our elected officials and public servants and their organizations.<br />
<br />
How do we begin to turn the tables on surveillance?<br />
<br />
<h4>
Secure Protocols</h4>
As I see it, this "defensive shift" will happen due to several factors. First, because the best and brightest engineers - the ones who design the inner workings of the Internet and write the open-source software used for secure computing - are on the whole smart enough to know that <a href="http://tools.ietf.org/html/rfc7258" target="_blank">pervasive surveillance is an attack</a> and a design flaw [4], are <a href="http://techcrunch.com/2013/10/11/icann-w3c-call-for-end-of-us-internet-ascendancy-following-nsa-revelations/" target="_blank">calling for it to be fixed</a> in future versions of Internet protocols [5], and are already working on <a href="https://www.fsf.org/blogs/community/gnu-hackers-discover-hacienda-government-surveillance-and-give-us-a-way-to-fight-back" target="_blank">fixing</a> some of the known exploits [6].<br />
<br />
One of the simplest remedial actions available right now for pervasive surveillance attacks is HTTPS, with initiatives like <a href="https://www.httpsnow.org/" target="_blank">HTTPS Now</a>[9] showing which web sites follow good security practices, and tools like <a href="https://www.eff.org/https-everywhere" target="_blank">HTTPS Everywhere</a>[10], a plugin for your web browser that helps you connect to websites securely. There is still work to be done in this area, as man-in-the-middle attacks and compromised cryptographic keys are widespread at this point - a problem for which <a href="http://en.wikipedia.org/wiki/Forward_secrecy#Perfect_forward_secrecy" target="_blank">perfect forward secrecy</a>[11] needs to become ubiquitous. We should expect future generations of networking protocols to be based on these security best practices.<br />
<br />
Some people say that creating a system that is totally secure against
all kinds of surveillance, including lawful intercept, will only give
bad people more opportunity to plan and carry out their dirty deeds.
But this turns out not to be true when you look at the actual data of
how much information has been collected, how much it all costs, and how
effective it's actually been. It yields practically nothing useful and
is almost always a "close the barn door, the horse is out!" scenario.
This, coming from an engineer who actually works in the area of
network-based threat analysis, by the way.<br />
<br />
<h4>
Open Data</h4>
Second, the open data movement. It's not just you and me producing data trails as we mobe and surf and twit around the Interwebs. There's a lot of data locked up in government systems, too. If you live in a democracy, who owns that data? We do. It's ours. More and more of it is being made available online, in formats that can be used for computerized data analysis. Sites like the Center for Responsive Politics' <a href="https://www.opensecrets.org/" target="_blank">Open Secrets Database</a> [8], for example, shed light on money in politics, showing who's lobbying for what, how much money they're giving, and who's accepting the <strike>bribes</strike>, er, donations.<br />
<br />
One nascent experiment in the area of government open data analysis is <a href="http://analyzethe.us/">AnalyzeThe.US</a> [7],
a site that lets you play with a variety of public data sources to see
correlations. Warning - it's possible for anyone to "prove" just about
anything with enough graphs and hand-waving. For real meaningful
analysis, having some background in mathematics and statistics is a
definite plus, but the tool is still super fun and provides a glimpse of
where things could be going in the future with open government.<br />
<br />
<h4>
Automation</h4>
Third, automation. There's still a long way to go in this area, but even the slowness and inefficiency of government will eventually give way to the relentless march of technology as more and more systems that have traditionally been mired in bureaucratic red tape become networked and automated, all producing data for analytics. Filling in paper forms for hours on end will eventually be as absurd for the government to require as it would be for buying a book from Amazon.<br />
<br />
With further automation and data access, the ability to monitor, analyze and even take remedial action on bureaucratic inefficiencies should be in the hands of ordinary people, turning the current model of Big Brother surveillance on its head. Algorithms will be able to measure the effectiveness of our public services and national infrastructures, do statistical analysis, provide deep insight and make recommendations. The business of running a government, which today seems to be a mix of guesswork, political ideology and public relations management, will start to become less of a religion and more of a science, backed up with real data. It won't be a technocracy - but it will be leveraging technology to effectively crowd-source government. Which is what democracy is all about, after all.<br />
<br />
<br />
[1] <a href="http://www.cbc.ca/news/world/how-the-defensive-shift-and-big-data-are-changing-baseball-1.2739619">http://www.cbc.ca/news/world/how-the-defensive-shift-and-big-data-are-changing-baseball-1.2739619</a><br />
[2] <a href="http://www.cbc.ca/news/politics/why-would-canada-spy-on-brazil-mining-and-energy-officials-1.1931465">http://www.cbc.ca/news/politics/why-would-canada-spy-on-brazil-mining-and-energy-officials-1.1931465</a> <br />
[3] <a href="http://www.cnn.com/2013/09/27/politics/nsa-snooping/">http://www.cnn.com/2013/09/27/politics/nsa-snooping/</a><br />
[4] <a href="http://tools.ietf.org/html/rfc7258">http://tools.ietf.org/html/rfc7258</a> <br />
[5] <a href="http://techcrunch.com/2013/10/11/icann-w3c-call-for-end-of-us-internet-ascendancy-following-nsa-revelations/" target="_blank">http://techcrunch.com/2013/10/11/icann-w3c-call-for-end-of-us-internet-ascendancy-following-nsa-revelations/ </a><br />
[6] <a href="https://www.fsf.org/blogs/community/gnu-hackers-discover-hacienda-government-surveillance-and-give-us-a-way-to-fight-back">https://www.fsf.org/blogs/community/gnu-hackers-discover-hacienda-government-surveillance-and-give-us-a-way-to-fight-back</a><br />
[7] <a href="http://analyzethe.us/">http://analyzethe.us/</a><br />
[8] <a href="https://www.opensecrets.org/">https://www.opensecrets.org/</a><br />
[9] <a href="https://www.httpsnow.org/">https://www.httpsnow.org/</a><br />
[10] <a href="https://www.eff.org/https-everywhere">https://www.eff.org/https-everywhere</a> <br />
[11] <a href="http://en.wikipedia.org/wiki/Forward_secrecy#Perfect_forward_secrecy">http://en.wikipedia.org/wiki/Forward_secrecy#Perfect_forward_secrecy</a><br />
<br />Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com3tag:blogger.com,1999:blog-3302102753324425376.post-20927549994097302992014-08-14T08:32:00.001-07:002016-05-05T06:37:52.612-07:00Repackaging node modules for local install with npm<div class="separator" style="clear: both; text-align: center;">
<a href="https://avatars3.githubusercontent.com/u/6078720?s=400" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="https://avatars3.githubusercontent.com/u/6078720?s=400" width="200" /></a></div>
<br />
If you need to install an npm package for Node.js from local files - because you can't or prefer not to download everything from the <a href="http://npmjs.org/">npmjs.org</a> repo, or you don't even have a network connection - then you can't just grab an npm package tarball and run <span style="background-color: #eeeeee;"><span style="font-family: 'courier new', 'courier', monospace;">`npm install &lt;tarball&gt;`</span></span>, because it will immediately try to download all its dependencies from the repo.<br />
<br />
There are some existing tools and resources you can try:<br />
<br />
<ul>
<li>npmbox - https://github.com/arei/npmbox</li>
<li>https://github.com/mikefrey/node-pac</li>
<li>bundle.js gist - https://gist.github.com/jackgill/7687308</li>
<li>relevant npm issue - https://github.com/npm/npm/issues/4210</li>
</ul>
<br />
I found all of these a bit overwrought for my taste. If you prefer a simple DIY approach, you can edit the module's package.json file, copy all of its dependencies over to the "bundledDependencies" array, and then run npm pack to build a new tarball that includes all the dependencies bundled inside.<br />
<br />
Using `forever` as an example:<br />
<ol>
<li>make a directory and run <span style="background-color: #eeeeee;"><span style="font-family: "courier new" , "courier" , monospace;">`npm init; npm install forever`</span></span> inside of it</li>
<li><span style="background-color: #eeeeee;"><span style="font-family: "courier new" , "courier" , monospace;">cd</span></span> into the <span style="background-color: #eeeeee;"><span style="font-family: "courier new" , "courier" , monospace;">node_modules/forever</span></span> directory</li>
<li>edit the <span style="background-color: #eeeeee;"><span style="font-family: "courier new" , "courier" , monospace;">package.json</span></span> file</li>
<li>look for the <span style="background-color: #eeeeee;"><span style="font-family: "courier new" , "courier" , monospace;">dependencies</span></span> property</li>
<li>add a <span style="background-color: #eeeeee;"><span style="font-family: "courier new" , "courier" , monospace;">bundledDependencies</span></span> property that's an array</li>
<li>copy the names of all the dependency modules into the <span style="background-color: #eeeeee;"><span style="font-family: "courier new" , "courier" , monospace;">bundledDependencies</span></span> array</li>
<li>save the <span style="background-color: #eeeeee;"><span style="font-family: "courier new" , "courier" , monospace;">package.json</span></span> file</li>
<li>now run <span style="background-color: #eeeeee;"><span style="font-family: 'courier new', 'courier', monospace;">`npm pack`</span></span>. It will produce a <span style="background-color: #eeeeee;"><span style="font-family: 'courier new', 'courier', monospace;">forever-&lt;version&gt;.tgz</span></span> file that has all its dependencies bundled in.</li>
</ol>
Update: another proposal from the <a href="https://github.com/npm/npm/issues/4210#issuecomment-210398516" target="_blank">GitHub thread</a> (I haven't verified this yet):<br />
<ol>
<li>In an online environment, run <code>npm install --no-bin-link</code>. You will have an <strong>entire flattened</strong> <code>node_modules</code>
</li>
<li>Then bundle this <strong>flattened</strong> <code>node_modules</code> with <code>tar / zip / rar / 7z</code>, etc.</li>
<li>In the offline environment, extract the bundle - that's it</li>
</ol>
<br />
<br />Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com1tag:blogger.com,1999:blog-3302102753324425376.post-43266636470710009442014-05-29T20:12:00.002-07:002014-05-29T20:17:28.550-07:00JavaScript's Final Frontier - MIDI<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuzTcSX2cqQKG5VT1vExTFsi9htD7mQwjncxWhH7xIEBrjusWFZned-7XKrJ08FA8hfdmkeCNBJMzit9bPxjP_g5e2qxGt_AN1ANmdya3X-g_bHLy2SaMP_m9mP2jvIwyu8KfOCU-dXWU/s1600/midi_logo.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiuzTcSX2cqQKG5VT1vExTFsi9htD7mQwjncxWhH7xIEBrjusWFZned-7XKrJ08FA8hfdmkeCNBJMzit9bPxjP_g5e2qxGt_AN1ANmdya3X-g_bHLy2SaMP_m9mP2jvIwyu8KfOCU-dXWU/s1600/midi_logo.png" /></a></div>
JavaScript has had an amazing last few years. <a href="http://nodejs.org/" target="_blank">Node.JS</a> has taken server-side development by storm. First-person shooter <a href="https://blog.mozilla.org/blog/2014/03/12/mozilla-and-epic-preview-unreal-engine-4-running-in-firefox/" target="_blank">games</a> are being built using HTML and JavaScript in the browser. <a href="https://github.com/NaturalNode/natural" target="_blank">Natural language processing</a> and machine learning are being implemented in minimalist JavaScript libraries. It would seem there's no area in which JavaScript isn't set to blow away preconceptions about what it can't do and become a major player.<br />
<br />
There is, however, one area in which JavaScript - or more accurately the web stack and the engines that implement it - has only made a few tentative forays. For me this represents a final frontier; the one area where JavaScript has yet to show that it can compete with native applications. That frontier is <a href="http://en.wikipedia.org/wiki/Midi" target="_blank">MIDI</a>.<br />
<br />
I know what you're probably thinking. Cheesy video game soundtracks on your SoundBlaster sound card. Web pages with blink tags and bad music tracks on autoplay. They represent one use case where MIDI was applied outside of its original intent. MIDI was made for connecting electronic musical instruments, and it is still <i>very</i> much alive and well. From lighting control systems to professional recording studios to GarageBand, MIDI is a key component of arts performance and production. MIDI connects sequencers, hardware, software synthesizers and drum machines to create the music many people listen to everyday. The specification, though aging, shows no signs of going away anytime soon. It's simple and effective and well crafted. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGTHUDgqjARUPdPpkIIt9S3tUB6ohQ-zlNxCbedxV74IXKVUP7IeQuo3M2o601Bs-zz7WIYHSoajIYAeJHxbb9pVlUUw4cGEk1BxgPMOhTHxGLgDVSVCJOMUrtX1Fc0PKk_tFSFRnHw2s/s1600/rack.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGTHUDgqjARUPdPpkIIt9S3tUB6ohQ-zlNxCbedxV74IXKVUP7IeQuo3M2o601Bs-zz7WIYHSoajIYAeJHxbb9pVlUUw4cGEk1BxgPMOhTHxGLgDVSVCJOMUrtX1Fc0PKk_tFSFRnHw2s/s1600/rack.jpg" height="200" width="132" /></a></div>
It had to be. Of all applications, music could be the most demanding. That's because in most applications, even realtime ones, the exact timing of event processing is flexible within certain limits. Interactive web applications can tolerate latency on their network connections. 3D video games can scale down their frames per second and still provide a decent user experience. At 30 frames per second, the illusion of continuous motion is approximated. The human ear, on the other hand, is capable of detecting delays as small as 6 milliseconds. For a musician, a latency of 20 ms between striking a key and hearing a sound would be a show-stopper. Accurate timing is essential for music performance and production.<br />
<br />
There's been a lot of interest and some amazing demos of Web Audio API functionality. The <a href="http://www.w3.org/TR/webmidi/" target="_blank">Web MIDI API</a>, on the other hand, hasn't gotten much support. Support for Web MIDI has landed in Chrome Canary, but that's it for now. A few people have begun to look at the possibility of adding <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=836897" target="_blank">support for it in Firefox</a>. Until the Web MIDI API is widely supported, interested people will have to make do with the <a href="http://jazz-soft.net/" target="_blank">JazzSoft midi plugin</a> and Chris Wilson's <a href="https://github.com/cwilso/WebMIDIAPIShim" target="_blank">Web MIDI API shim</a>. <br />
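For the curious, here's roughly what driving an instrument from JavaScript looks like. The message-building helpers are plain JavaScript; the commented-out portion shows how they'd plug into the draft Web MIDI API - note that the `outputs()` call follows the early draft spec and its shape may well change:

```javascript
// Plain JavaScript helpers that build raw MIDI messages: a status byte
// followed by data bytes. Note-on is status 0x90 (channel 1); note-off is
// 0x80. Data bytes are masked to the valid 0-127 range.
function noteOn(note, velocity) {
  return [0x90, note & 0x7f, velocity & 0x7f];
}

function noteOff(note) {
  return [0x80, note & 0x7f, 0];
}

// In a browser with Web MIDI support (Chrome Canary, at the time of writing),
// the helpers would plug in roughly like this:
//
//   navigator.requestMIDIAccess().then(function (midi) {
//     var output = midi.outputs()[0];                           // early draft API
//     output.send(noteOn(60, 100));                             // middle C, forte
//     output.send(noteOff(60), window.performance.now() + 500); // 500 ms later
//   });

console.log(noteOn(60, 100)); // [ 144, 60, 100 ]
```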
<br />
I remain hopeful that support for this API will grow, because it will open up doors for some truly great new creative and artistic initiatives.Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-48396476433488110782014-05-07T21:27:00.001-07:002015-01-22T19:26:55.576-08:00REST API Best Practices 3: Partial Updates - PATCH vs PUTThis post is a continuation of <a href="http://51elliot.blogspot.com/2014/04/rest-api-best-practices-http-and-crud.html">REST API Best Practices 2: HTTP and CRUD</a>, and deals with the question of partial updates.<br />
<br />
REST purists insist that PATCH is the only "correct" way to perform partial updates [1], but it hasn't reached "best-practice" status just yet, for a number of reasons.<br />
<br />
Pragmatists, on the other hand, are concerned with building mobile back-ends and APIs that simply work and are easy to use, even if that means using PUT to perform partial updates [2].<br />
<br />
The problems with using PATCH for partial updates are manifold: <br />
<ol>
<li>Support for PATCH in browsers, servers and web application frameworks is not universal. IE8, PHP, Tomcat, Django, and lots of other software have missing or flaky support for it. So depending on your technology stack and users, it might not even be a valid option for you.</li>
<li>Using the PATCH method correctly requires clients to submit a document describing the differences between the new and original documents, like a diff file, rather than a straightforward list of modified properties. This means the client has to do a lot of extra work - keep a copy of the original resource, compare it to the modified resource, create a "diff" between the two, compose some type of document showing the differences, and send it to the server. The server also has more work to apply the diff file. </li>
<li>There's no specification that says how the changes in the diff file should be formatted or what it should contain, exactly. The RFC simply says: <br /><blockquote class="tr_bq">
"With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version."</blockquote>
One early recommendation for using PATCH is the JSON Patch RFC [3]. Unfortunately, the spec overly complicates updating. I describe a much simpler alternative below, which works with either PATCH or PUT. </li>
</ol>
<br />
<h3>
Pragmatic partial updates with PUT</h3>
Using PUT for partial updates is pretty simple, even if it
doesn't conform strictly to the concept of Representational State
Transfer. So a fair number of programmers happily use it to implement
partial updates on back-end mobile API servers. It's fair to say that
when developing an API, a pragmatic approach that focuses on the needs
of mobile client applications is completely reasonable.<br />
<br />
The current "best practice" when using PUT for partial updates, as I see it, is this: when you PUT the update:
<ol>
<li>Include the properties to be updated, with their new values</li>
<li>Don't include properties that are not to be updated</li>
<li>Set properties to be 'deleted' to null </li>
</ol>
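The three rules above amount to what was later formalized as JSON Merge Patch. On the server, applying such an update takes only a few lines - here's a sketch (a flat version; it doesn't recurse into nested objects the way RFC 7386 does):

```javascript
// Sketch: apply a partial update where omitted properties are left alone
// and null means "delete this property" (a flat take on JSON Merge Patch).
function applyPartialUpdate(resource, update) {
  const result = Object.assign({}, resource);
  for (const key of Object.keys(update)) {
    if (update[key] === null) {
      delete result[key];        // rule 3: null deletes the property
    } else {
      result[key] = update[key]; // rule 1: included properties get new values
    }
  }
  return result;                 // rule 2: omitted properties carry over
}

const member = { id: 1, firstname: "john", lastname: "doe", nickname: "jd" };
const updated = applyPartialUpdate(member, { firstname: "jane", nickname: null });
console.log(updated); // { id: 1, firstname: 'jane', lastname: 'doe' }
```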
The reality is that most data is going to be stored in a database that has an implicit or explicit schema describing what sort of data your application is expecting. If you're using a relational database, this will end up being columns in your database tables, some of whose values may be null. In this scenario it makes perfect sense to "delete" properties by setting them to null, since the database columns are not going to disappear in any case. And for those who use a NoSQL database, it's not a stretch to delete nullified properties.<br />
<br />
<b>Update:</b> This pragmatic approach to updates is used by a number of exemplary SaaS companies, including Github. It can also be used with the HTTP PATCH method, and it has now been formalized in <a href="https://tools.ietf.org/html/rfc7386" target="_blank">RFC 7386 JSON Merge Patch</a> [4].<br />
<br />
<h3>
Further reading</h3>
1. <a href="http://williamdurand.fr/2014/02/14/please-do-not-patch-like-an-idiot/">http://williamdurand.fr/2014/02/14/please-do-not-patch-like-an-idiot/</a><br />
2. <a href="http://techblog.appnexus.com/2012/on-restful-api-standards-just-be-cool-11-rules-for-practical-api-development-part-1-of-2/">http://techblog.appnexus.com/2012/on-restful-api-standards-just-be-cool-11-rules-for-practical-api-development-part-1-of-2/</a><br />
3. <a href="http://tools.ietf.org/html/draft-ietf-appsawg-json-patch-07">http://tools.ietf.org/html/draft-ietf-appsawg-json-patch-07</a><br />
4. <a href="https://tools.ietf.org/html/rfc7386">https://tools.ietf.org/html/rfc7386</a> Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-46219248394683358182014-04-07T21:28:00.002-07:002017-03-14T09:51:47.642-07:00REST API Best Practices 2: HTTP and CRUDThis post expands a bit further on the <a href="http://51elliot.blogspot.com/2014/03/rest-api-best-practices-rest-cheat-sheet.html">REST API Cheat Sheet</a> regarding HTTP operations for Create / Read / Update / Delete functionality in REST APIs.<br />
<br />
APIs for data access and management are typically concerned with four actions (the so-called CRUD operations):<br />
<ul>
<li><b>Create</b> - the ability to create a resource</li>
<li><b>Read</b> - the ability to retrieve a resource</li>
<li><b>Update</b> - the ability to modify a resource</li>
<li><b>Delete</b> - the ability to remove a resource</li>
</ul>
<br />
CRUD operations don't have a perfect, 1-to-1 mapping to HTTP methods,
which has led to different opinions and implementations, but the following list represents best practice as I see it in the industry today, and follows the HTTP specification:<br />
<br />
<table>
<tbody>
<tr>
<td><b>CRUD Operation </b></td><td><b>HTTP Method</b></td>
</tr>
<tr>
<td>Create</td><td>POST</td>
</tr>
<tr>
<td>Read</td><td>GET</td>
</tr>
<tr>
<td>Update</td><td>PUT and/or PATCH</td>
</tr>
<tr>
<td>Delete</td><td>DELETE</td>
</tr>
</tbody>
</table>
<br />
To reiterate, HTTP methods can be used to implement CRUD operations as follows: <br />
<ul>
<li>POST - create a resource</li>
<li>GET - retrieve a resource</li>
<li>PUT - update a resource (by replacing it with a new version)*</li>
<li>PATCH - update part of a resource (if available and appropriate)*</li>
<li>DELETE - remove a resource</li>
</ul>
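The mapping above can be sketched as a bare-bones dispatcher over an in-memory collection. This is a sketch only - no framework, no URL parsing, and the handler and response shapes are invented for illustration:

```javascript
// Sketch: dispatch from HTTP method to CRUD operation on an in-memory
// collection, returning status codes per the mapping above.
const store = new Map();
let nextId = 1;

function handle(method, id, body) {
  switch (method) {
    case "POST": {                      // Create: server assigns the new ID
      const newId = nextId++;
      store.set(newId, body);
      return { status: 201, body: Object.assign({ id: newId }, body) };
    }
    case "GET":                         // Read
      return store.has(id) ? { status: 200, body: store.get(id) } : { status: 404 };
    case "PUT":                         // Update: full replacement
      if (!store.has(id)) return { status: 404 };
      store.set(id, body);
      return { status: 200, body };
    case "DELETE":                      // Delete
      store.delete(id);
      return { status: 204 };
    default:
      return { status: 405 };           // Method Not Allowed
  }
}

const created = handle("POST", null, { firstname: "john" });
console.log(created.status);                              // 201
console.log(handle("GET", created.body.id, null).status); // 200
```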
<div>
<br />
Although PATCH is considered the officially correct and "RESTful" way to do partial updates, it has yet to gain wide adoption. Many popular web application frameworks don't support the PATCH method yet, so in practice, it is not uncommon to use PUT for partial updates even though it's not strictly "RESTful". The decision to use PUT vs. PATCH for partial updates is driven by the capabilities of your framework of choice (Rails only recently introduced PATCH, for example) and by the practical requirements of building web/mobile back-end services that actually work and are easy to use, even if they don't satisfy REST purists. More on this in the next post.</div>
<div>
<h3>
Safe and Idempotent Methods</h3>
</div>
<div>
The HTTP 1.1 specification defines "safe" and "idempotent" methods [1]. <i>Safe</i> methods don't modify data on the server no matter how many times you call them.<i> Idempotent</i> methods can modify data on the server the first time you call them, but repeating the same call over and over again won't make any difference. Here's a partial list:</div>
<br />
<table>
<tbody>
<tr>
<th>Method </th><th>Safe </th><th>Idempotent</th>
</tr>
<tr>
<td>GET</td><td style="color: green;">✔</td><td style="color: green;">✔</td>
</tr>
<tr>
<td>HEAD</td><td style="color: green;">✔</td><td style="color: green;">✔</td>
</tr>
<tr>
<td>PUT</td><td style="color: red;">×</td><td style="color: green;">✔</td>
</tr>
<tr>
<td>PATCH</td><td style="color: red;">×</td><td style="color: red;">×</td>
</tr>
<tr>
<td>DELETE</td><td style="color: red;">×</td><td style="color: green;">✔</td>
</tr>
<tr>
<td>POST</td><td style="color: red;">×</td><td style="color: red;">×</td>
</tr>
</tbody></table>
<br />
The safe and/or idempotent nature of these HTTP methods provides some further insight into how they ought to be used. Notice that POST is neither safe, nor idempotent. A successful POST should create new data on the server, and repeating the same call should create even more copies on the server. GET, on the other hand, is safe and idempotent, so no matter how many times you call it, the data on the server shouldn't be affected.<br />
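A toy in-memory store makes the difference concrete (an illustrative sketch, not a real server):

```javascript
// Sketch: POST creates a new resource on every call (not idempotent), while
// repeating the same PUT leaves the server in the same state (idempotent).
const resources = {};
let counter = 0;

function post(body) {
  const id = ++counter;   // every call mints a brand-new resource
  resources[id] = body;
  return id;
}

function put(id, body) {
  resources[id] = body;   // replaces; repeating it changes nothing further
}

post({ name: "a" });
post({ name: "a" });      // a second, distinct resource now exists
put(3, { name: "b" });
put(3, { name: "b" });    // same end state as calling it once

console.log(Object.keys(resources).length); // 3
```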
<br />
<b>GET</b> - use it to fetch resources, but don't "tunnel" request parameters through to the server as a way to alter the state of data on the server - as a "safe" method, calling GET shouldn't have side effects.<br />
<br />
<b>PUT</b> - use it to update an existing resource by replacing it with a new representation. The data you PUT to the server should be a complete replacement for the specified resource. Although PUT can in theory be used to insert new resources, in practice it's not advisable. Note that after the first PUT request, repeatedly calling the same PUT method with the same data won't change the data on the server more than it already has been (a condition of idempotent methods).<br />
<br />
<b>PATCH</b> - if this method is available and well supported in both your client and server side technology stack (e.g. Rails 4), consider using it to update part of an existing resource by changing some of its properties, following the framework's recommendations for how to submit the change descriptions. The PATCH method isn't supported everywhere and not common enough to be considered a current best practice, but the industry seems to be moving this way, and technically it's the correct way to provide partial updates according to the HTTP spec [2].<br />
<br />
If your server, framework or client user base (IE8, etc.) doesn't support PATCH, rest assured that many developers take the pragmatic approach and simply bend the rules to use PUT for partial updates [3]. I'll cover this in more detail in the next post. Note that, no matter how you do your partial update, it should be <i>atomic</i>; that is, once the update has started, it should not be possible to retrieve a copy of the resource until the update has been fully applied.<br />
<br />
<b>POST</b> - use it to create new resources. The server should create a unique identifier for each newly created resource. Return a 201 Created response if the request was successful. The unique ID should be returned in the response; it has been suggested to use the Location header of the response for this, but for most client applications it will be more practical to return the ID in the body of the response. For this reason, best practice currently appears to be to populate both the Location header with the URL of the newly created resource, and also return a representation of the resource in the response body that includes its ID and/or URL. POST is also frequently used to trigger actions on the server which technically aren't part of a RESTful API, but provide useful functionality for web applications.<br />
<br />
<b>DELETE</b> - use it to delete resources; it's pretty self-explanatory.<br />
<br />
<h3>
More posts in this series</h3>
<a href="http://51elliot.blogspot.ca/2014/03/rest-api-best-practices-rest-cheat-sheet.html">REST API Best Practices 1: A REST Cheat Sheet </a><br />
<a href="http://51elliot.blogspot.com/2014/05/rest-api-best-practices-3-partial.html">REST API Best Practices 3: Partial Updates - PATCH vs. PUT </a><br />
<a href="http://51elliot.blogspot.ca/2014/06/rest-api-best-practices-4-collections.html">REST API Best Practices 4: Collections, Resources and Identifiers</a><br />
<br />
<br />
[1] <a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html" target="_blank">http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html</a><br />
[2] <a href="http://stackoverflow.com/questions/19732423/why-isnt-http-put-allowed-to-do-partial-updates-in-a-rest-api" target="_blank">http://stackoverflow.com/questions/19732423/why-isnt-http-put-allowed-to-do-partial-updates-in-a-rest-api</a><br />
[3] <a href="http://techblog.appnexus.com/2012/on-restful-api-standards-just-be-cool-11-rules-for-practical-api-development-part-1-of-2/" target="_blank">http://techblog.appnexus.com/2012/on-restful-api-standards-just-be-cool-11-rules-for-practical-api-development-part-1-of-2/ </a> Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com1tag:blogger.com,1999:blog-3302102753324425376.post-20796349736810534852014-03-21T09:55:00.001-07:002014-09-17T19:22:08.492-07:00REST API Best Practices: a REST Cheat SheetI'm interested in REST API design and identifying the best practices for it. Surprisingly, a lot of APIs that claim to be RESTful aren't. And the others all do things differently. This is a popular area, though, and some best practices are starting to emerge. If you're interested in REST, I'd like to hear your thoughts about best practices.<br />
<br />
REST is not simply JSON over HTTP, but most RESTful APIs are based on HTTP. Request methods like POST, GET, PUT and DELETE are used to implement Create, Read, Update and Delete (CRUD) operations. The first question is how to map HTTP methods to CRUD operations.<br />
<br />
To start, here's a "REST API Design Cheat Sheet" that I typed up and pinned to my wall. It's based on the book "<a href="http://shop.oreilly.com/product/0636920021575.do">REST API Design Rulebook</a>", and the HTTP RFC. I think it reflects standard practice. There are newer and better books on the subject now, but this list covers the basics of HTTP requests and response codes used in REST APIs. <br />
<br />
<h3>
Request Methods</h3>
<ul>
<li>GET and POST should not be used in place of other request methods</li>
<li>GET is used to retrieve a representation of a resource</li>
<li>HEAD is used to retrieve response headers</li>
<li>PUT is used to insert or update a stored resource</li>
<li>POST is used to create a new resource in a collection</li>
<li>DELETE is used to remove a resource</li>
</ul>
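The list above amounts to a direct mapping from CRUD operations to HTTP request methods; a small lookup table (just a sketch, not a library API) makes the correspondence explicit.

```javascript
// The CRUD-to-HTTP mapping from the cheat sheet as a lookup table.
const crudToHttp = {
  create: 'POST',   // add a new resource to a collection
  read:   'GET',    // retrieve a representation of a resource
  update: 'PUT',    // insert or update a stored resource
  delete: 'DELETE'  // remove a resource
};

console.log(crudToHttp.read); // GET
```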
<h3>
Response Status Codes</h3>
<ul>
<li>200 "OK" indicates general success</li>
<li>200 "OK" shouldn't be used to return error messages</li>
<li>201 "Created" indicates a resource was successfully created</li>
<li>202 "Accepted" indicates that an asynch operation was started</li>
<li>204 "No Content" indicates success but with an intentionally empty response body</li>
<li>301 "Moved Permanently" is used for relocated resources</li>
<li>303 "See Other" tells the client to query a different URI</li>
<li>304 "Not Modified" is used to save bandwidth</li>
<li>307 "Temporary Redirect" means resubmit the query to a different URI</li>
<li>400 "Bad Request" indicates a general failure</li>
<li>401 "Unauthorized" indicates bad credentials</li>
<li>403 "Forbidden" denies access regardless of authentication</li>
<li>404 "Not Found" means the URI doesn't map to a resource</li>
<li>405 "Method Not Allowed" means the HTTP method isn't supported</li>
<li>406 "Not Acceptable" indicates the requested format isn't available</li>
<li>409 "Conflict" indicates a problem with the state of the resource</li>
<li>412 "Precondition Failed" is used for conditional operations</li>
<li>415 "Unsupported Media Type" means the type of payload can't be processed</li>
<li>500 "Internal Server Error" indicates an API malfunction</li>
</ul>
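As a quick illustration of applying the cheat sheet, a read handler might pick its status code like this. The checks are simplified and the function is hypothetical, not part of any framework.

```javascript
// Choosing a response status for a simple read operation, following
// the cheat sheet above. Simplified, hypothetical checks.
function statusForRead(resource, authorized) {
  if (!authorized) return 401; // bad credentials
  if (!resource) return 404;   // URI doesn't map to a resource
  return 200;                  // general success
}

console.log(statusForRead({ id: 1 }, true));  // 200
console.log(statusForRead(null, true));       // 404
console.log(statusForRead({ id: 1 }, false)); // 401
```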
A note about the PATCH method. There are good reasons to consider using the HTTP PATCH method for partial updates of resources, but because it's not supported everywhere, and because there are workarounds, I haven't added it to my cheat sheet yet.<br />
<br />
<h3>
Other Posts in this series</h3>
<a href="http://51elliot.blogspot.com/2014/04/rest-api-best-practices-http-and-crud.html">REST API Best Practices 2: HTTP and CRUD</a><br />
<a href="http://51elliot.blogspot.com/2014/05/rest-api-best-practices-3-partial.html">REST API Best Practices 3: Partial Updates - PATCH vs. PUT </a><br />
<a href="http://51elliot.blogspot.ca/2014/06/rest-api-best-practices-4-collections.html">REST API Best Practices 4: Collections, Resources and Identifiers</a> <br />
Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com1tag:blogger.com,1999:blog-3302102753324425376.post-85721540039797185772014-03-19T16:30:00.001-07:002014-03-20T07:13:58.386-07:00Kasa<a href="http://www.flickr.com/photos/30357539@N08/6117972912/" title="photo sharing"><img alt="" src="http://farm7.static.flickr.com/6065/6117972912_5ff401dfc0_m.jpg" height="213" style="border: 1px solid rgb(102, 102, 102);" width="320" /></a><br />
<span style="font-size: 0.9em; margin-top: 0px;"><a href="http://www.flickr.com/photos/30357539@N08/6117972912/">Kasa</a>
© 2010 Darren DeRidder. </span><br />
Umbrella detail, Kinosaki, JapanDarrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-17684136804471127192014-03-11T08:03:00.001-07:002017-01-05T07:18:44.845-08:00When Agile Went Off the Rails<iframe frameborder="0" height="407" scrolling="no" src="//embed.gettyimages.com/embed/149276058?et=C5EAu-hYAEy8Lf8NqDztWA&sig=wQMRP5i9E4wCf3tk99H-C2tpRqf8FeCUHCgVbvJIPb0=" width="507"></iframe>
<br />
Whenever I hear a company say "We follow an agile development process", I can't help but wince a little. The core ideas of agile development are excellent, but somewhere along the way it accumulated quite a lot of codified process, and became its own formal methodology - almost the same thing the Agile Manifesto was trying to counteract. It's not too surprising, since the agile manifesto didn't prescribe any particular project management methodology for implementing its guidelines. So naturally it wasn't long before management professionals began to formalize agile philosophy into a methodology of their own.<br />
<br />
Now one of the original authors of the <a href="http://agilemanifesto.org/" target="_blank">Agile Manifesto</a> has come out with a piece, originally titled "<a href="http://pragdave.me/blog/2014/03/04/time-to-kill-agile/" target="_blank">Time to Kill Agile</a>", in which he makes this point that a formal methodology runs counter to the original goals of the agile development concept. <a href="http://pragdave.me/" target="_blank">Dave Thomas</a> has been hugely influential in the software development field. Aside from being one of the authors of the agile manifesto, <a href="https://en.wikipedia.org/wiki/Programming_Ruby" target="_blank">he's</a> <a href="https://en.wikipedia.org/w/index.php?title=Agile_Web_Development_with_Rails&action=edit&redlink=1" target="_blank">written</a> <a href="http://www.amazon.com/gp/pdp/profile/A2CJPVWAV4KLAL" target="_blank">a lot</a> of other stuff, and he's the guy who coined the phrases "<a href="http://codekata.com/" target="_blank">code kata</a>" and "<a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" target="_blank">DRY</a>" (Don't Repeat Yourself - the maxim developers follow to effectively organize their code). He later renamed the piece "<a href="http://pragdave.me/blog/2014/03/04/time-to-kill-agile/" target="_blank">Agile Is Dead (Long Live Agility)!</a>", which is a better reflection of his current thinking on Agile processes vs agile development's underlying goals.<br />
<br />
Being a critic of Agile is risky; in many cases it seems to have improved the effectiveness of teams a lot. It's working for a lot of people, and they like it.<br />
<br />
But having one of the original authors of the Agile Manifesto come out with this kind of criticism of agile methodology makes a certain amount of healthy skepticism seem appropriate.<br />
<br />
Agile software development, according to Dave Thomas, can't be implemented as a set of methodologies, and the <a href="http://www.agilealliance.org/" target="_blank">managers</a>, <a href="http://agilemethodology.org/" target="_blank">consultants</a> and <a href="http://www.oracle.com/us/corporate/Acquisitions/agile/index.html" target="_blank">companies</a> that have sprung up around Agile have shown a certain level of disregard for what the authors of the Agile Manifesto intended in the first place.<br />
<br />
Dave Thomas has some good advice for teams that want to develop software with agility. He advocates an iterative approach to development, and choosing options that enable future change. He recommends thinking of "agile" in the form of an adverb (agilely, or "with agility"). Programming <i>with agility</i>. Teams that execute <i>with agility</i>.<br />
<br />
I've found that when it comes to managing a project, simple is usually better. What's worked best in my experience is, in a nutshell, to simply encourage communication. Make sure everyone understands the overall objective, how they can contribute to it, what progress has been made and what challenges remain, and importantly, give everyone the opportunity to have their work fully recognized and appreciated on a regular basis. Given the opportunity to work on a challenging project and the chance to have their contributions seen and appreciated by colleagues, most developers will bend over backwards to do their best.<br />
<div>
<br /></div>
<div>
One approach I found effective combined a brief Monday morning meeting (timed, with a hard stop) to lay out the objectives for the week ahead, a quick information-gathering hike around the office at the end of the week, and an email recap on Friday afternoon highlighting the team's progress. Showing the percentage-towards-completion of major tasks was also a big motivator, as developers began to take pride in seeing their areas of responsibility make steady, visible progress towards completion. We didn't formalize or get locked into one way of doing it, so when our company got acquired and our management structure changed, we adapted pretty easily.<br />
<br />
Perhaps this isn't too far away from the way agile methodology is practiced "by the book". Regardless of the methodology, it's worth noting that the Agile Manifesto wasn't really a call to implement any particular process. It had broader goals in mind:<br />
<ul>
<li>People over processes.</li>
<li>Working software over documentation.</li>
<li>Collaboration over contracts.</li>
<li>Adaptability over planning.</li>
</ul>
</div>
Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-47295191029109646632014-02-22T14:57:00.001-08:002015-01-19T07:02:25.888-08:00Itsukushima JinjaUNESCO World Heritage Site, Itsukushima Shrine, Hatsukaichi, Hiroshima Prefecture, Japan.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<a href="https://www.flickr.com/photos/30357539@N08/12705988744" title="Itsukushima Jinja by D DeRidder, on Flickr"><img alt="Itsukushima Jinja" height="333" src="https://farm6.staticflickr.com/5511/12705988744_a1eb328c6b.jpg" width="500" /></a>
<br />
<span style="font-size: 0.9em; margin-top: 0px;"><a href="http://www.flickr.com/photos/30357539@N08/12705988744/">Itsukushima Jinja</a> <br />© 2014 Darren DeRidder. Originally uploaded by <a href="http://www.flickr.com/photos/30357539@N08/">73rhodes</a></span>Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-89597804839301070442013-12-05T09:43:00.003-08:002014-11-11T20:37:07.820-08:00Node.JS Module Patterns using simple examplesSlides for a recent talk at Ottawa.JS on "<a href="http://darrenderidder.github.io/talks/ModulePatterns/#/" target="_blank">Node.JS Module Patterns using simple examples</a>" are available. The slides have been updated to include a brief intro to Common.JS, examples for exporting named and anonymous functions, objects and prototypes, and an explanation of "exports" vs. "module.exports".<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://darrenderidder.github.io/talks/ModulePatterns/#/" target="_blank"><img alt="http://darrenderidder.github.io/talks/ModulePatterns/#/" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfUkYOJS7AWCvCF8gC58IFqv4LyH7sHeKeNMchbJsLMDenFMwhwuutKaqOcGGtPbdwaMXU85fdp_BucIDjYztEcqFRj9AsRZq5329d6qO9xjcTSG2L3dl2Fj2KKUeDrkEDFFEc0qSdddU/s320/modulepatterns.png" height="201" width="320" /></a></div>
<br />Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0tag:blogger.com,1999:blog-3302102753324425376.post-87413055089228990172013-09-28T16:13:00.000-07:002014-11-11T20:46:51.905-08:00JavaScript EfficienciesI'm working on an article about patterns for structuring Express.JS apps, which is taking too long, so I decided to write this instead: Here are a few tips and tricks for JavaScript programming that I like. <br />
<h2>
Comment switches</h2>
Comment switches let you comment out an entire block of code, or toggle between two alternative blocks, by adding or removing a single character. This can be useful when prototyping. <br />
<blockquote class="tr_bq">
<b><span style="font-family: &quot;Courier New&quot;,Courier,monospace;">//*<br />
console.log("Hello!\n");<br />
/*/<br />
console.log("Goodbye!\n");<br />
// */</span></b>
</blockquote>
Removing the first slash '/' toggles between these two print statements. See the <a href="http://51elliot.blogspot.ca/2007/09/comment-switches.html" target="_blank">original post</a> for more examples of comment switches.<br />
<br />
<h2>
Iterate by Counting Down</h2>
You can iterate n times concisely, like this:<br />
<blockquote class="tr_bq">
<b><span style="font-family: "Courier New",Courier,monospace;">var n = 1000;</span></b><br />
<b><span style="font-family: "Courier New",Courier,monospace;">while (n--) { ... }</span></b></blockquote>
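One thing to be aware of with this idiom: the loop body runs exactly n times, but n is left at -1 afterwards, so don't plan on reusing it. A quick sanity check:

```javascript
// The count-down loop runs its body exactly n times, and leaves n
// at -1 once the loop exits.
var n = 1000;
var count = 0;
while (n--) { count++; }
console.log(count, n); // 1000 -1
```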
<br />
<h2>
Defaulting Arguments</h2>
This is a handy way to provide a default value for undefined arguments in a JavaScript function.<br />
<blockquote class="tr_bq">
<b><span style="font-family: "Courier New",Courier,monospace;">function foo(bar) {</span></b><br />
<b><span style="font-family: "Courier New",Courier,monospace;"> var bar = bar || "Some default";</span></b><br />
<b><span style="font-family: "Courier New",Courier,monospace;"> ...</span></b><br />
<b><span style="font-family: "Courier New",Courier,monospace;">}</span></b></blockquote>
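One caveat worth knowing: the || operator substitutes the default for any falsy argument, not just undefined, so 0, "", null, and false all trigger it. A quick demonstration:

```javascript
// "||" defaulting kicks in for ANY falsy argument, not just
// undefined -- 0, "", null, and false all trigger the default.
function foo(bar) {
  bar = bar || "Some default";
  return bar;
}

console.log(foo());        // Some default
console.log(foo(0));       // Some default (perhaps not what you wanted)
console.log(foo("hello")); // hello
```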
Darrenhttp://www.blogger.com/profile/00230771763285373052noreply@blogger.com0