CoderDojo’s ‘Coolest Projects’ is an event taking place in the RDS next Saturday, 26th May, from 10am. Over 1,000 children aged 7 to 17 from around Ireland and 14 other countries, including Argentina, Bulgaria, Spain and Japan, will be at the event to showcase the tech projects they have developed at their local CoderDojo and other coding initiatives, and to attend an awards ceremony recognising the best projects across a range of categories.
Some of the projects being showcased this year include a face recognition door entry system for the elderly, where regular visitors can register their face so that if they visit, the elderly person will be informed as to who is at the door; a health tech app that enables people with special needs or speech challenges to communicate easily with family and friends using pictures and text to speech features; an adventure game teaching farm safety; and an experiment to determine if the working environment in space is comfortable to live in, with pressure, temperature and humidity being examined. It is currently being tested on the International Space Station with findings being reported at Coolest Projects.
Co-creator of the Raspberry Pi computer, Pete Lomas, will be the keynote speaker at Coolest Projects International 2018 and in his talk he will share how he went from school misfit to joint recipient of the UK’s most prestigious prize for engineering innovation, the MacRobert Award.
The event seeks to inspire educators, engage parents and celebrate creativity through panel discussions, hands on activities, demonstrations and workshops. Lively panels with leading edtech and industry experts will debate the role of technology in education, and how we can empower girls in the sector. Examples of activities young and old can participate in on the day include STEM escape rooms, flying drones, testing out VR headsets, creating wearable technology pieces and a giant circuit board puzzle!
The aim of the event is to celebrate the children’s creativity and innovation and to inspire and encourage both children and adults alike to either get involved in their local coding clubs or to establish their own.
We get asked about Node.js best practices and tips all the time, so this post intends to clear things up and summarize the basics of how we write Node.js at RisingStack.
Some of these Node.js best practices fall under the category of Coding style, some deal with Developer workflow.
Coding style
Callback convention
Modules should expose an error-first callback interface.
It should be like this:
module.exports = function (dragonName, callback) {
  // do some stuff here
  var dragon = createDragon(dragonName);

  // note that the first parameter is the error,
  // which is null here,
  // but if an error occurs, a new Error
  // should be passed instead
  return callback(null, dragon);
}
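To illustrate the consumer's side of this contract, here is a hedged sketch; makeDragon and its validation rule are invented for the example and are not part of the snippet above:

```javascript
// makeDragon is a hypothetical stand-in for the exported module function
function makeDragon(dragonName, callback) {
  if (!dragonName) {
    // an Error always travels in the first argument
    return callback(new Error('dragonName is required'));
  }
  // success: null error first, then the result
  return callback(null, { name: dragonName });
}

makeDragon('Smaug', function (err, dragon) {
  if (err) {
    return console.error(err);
  }
  console.log(dragon.name); // Smaug
});
```

Because the error slot always comes first, every caller can check it the same way, regardless of what the module does internally.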
Always check for errors in callbacks
To better understand why this is a must, first start with an example that is broken in every possible way, then fix it.
// this example is **BROKEN**, we will fix it soon :)
var fs = require('fs');

function readJSON(filePath, callback) {
  fs.readFile(filePath, function (err, data) {
    callback(JSON.parse(data));
  });
}

readJSON('./package.json', function (err, pkg) { ... });
The very first problem with this readJSON function is that it never checks whether an Error happened during execution. You should always check for them.
The improved version:
// this example is **STILL BROKEN**, we are fixing it!
function readJSON(filePath, callback) {
  fs.readFile(filePath, function (err, data) {
    // here we check if an error happened
    if (err) {
      // yep, pass the error to the callback
      // remember: error-first callbacks
      callback(err);
    }

    // no error, pass a null and the JSON
    callback(null, JSON.parse(data));
  });
}
Return on callbacks
One of the problems that still exists in the above example is that if an Error occurs, the execution will not stop in the if statement, but will continue. This can lead to lots of unexpected things. As a rule of thumb, always return on callbacks.
// this example is **STILL BROKEN**, we are fixing it!
function readJSON(filePath, callback) {
  fs.readFile(filePath, function (err, data) {
    if (err) {
      return callback(err);
    }

    return callback(null, JSON.parse(data));
  });
}
Use try-catch in sync code only
Almost there! One more thing we have to take care of is the JSON.parse. JSON.parse can throw an exception if it cannot parse the input string as valid JSON.
As JSON.parse happens synchronously, we can surround it with a try-catch block. Please note that you can only do this with synchronous code blocks; it won't work for callbacks!
// this example **WORKS**! :)
function readJSON(filePath, callback) {
  fs.readFile(filePath, function (err, data) {
    var parsedJson;

    // Handle error
    if (err) {
      return callback(err);
    }

    // Parse JSON
    try {
      parsedJson = JSON.parse(data);
    } catch (exception) {
      return callback(exception);
    }

    // Everything is ok
    return callback(null, parsedJson);
  });
}
Try to avoid this and new
Binding to a specific context in Node is not a win, because
Node involves passing around lots of callbacks, and heavy use of
higher-level functions to manage control flow. Using a functional style
will save you a lot of trouble.
Of course, there are some cases, when prototypes can be more efficient, but if possible, try to avoid them.
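A minimal sketch of the contrast; Dragon and createDragon are hypothetical names, not from any library:

```javascript
// constructor style: `this` must be bound correctly everywhere
function Dragon(name) {
  this.name = name;
}
Dragon.prototype.greet = function () {
  return 'I am ' + this.name;
};

// functional style: no `this`, no `new`, closures hold the state
function createDragon(name) {
  return {
    greet: function () {
      return 'I am ' + name;
    }
  };
}

var d = createDragon('Smaug');
var greet = d.greet;   // a detached reference still works,
console.log(greet());  // because `name` lives in the closure: I am Smaug
```

Detaching Dragon.prototype.greet the same way would lose its `this` context, which is exactly the class of bug the functional style sidesteps.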
Create small modules
Do it the unix-way:
Developers should build a program out of simple parts
connected by well defined interfaces, so problems are local, and parts
of the program can be replaced in future versions to support new
features.
Do not build Death Stars - keep it simple: a module should do one thing, and do it well.
Error handling
Errors can be divided into two main categories: operational errors and programmer errors.
Operational errors
Operational errors can happen in well-written applications as well, because they are not bugs but problems with the system or a remote service, like:
request timeout
system is out of memory
failed to connect to a remote service
Handling operational errors
Depending on the type of the operational error, you can do the following:
Try to solve the error - if a file is missing, you may need to create one first
Retry the operation, when dealing with network communication
Tell the client, that something is not ok - can be used, when handling user inputs
Crash the process, when the error condition is unlikely to
change on its own, like the application cannot read its configuration
file
Also, it is true for all the above: log everything.
Programmer errors
Programmer errors are bugs. These are errors you can avoid, like:
called an async function without a callback
cannot read property of undefined
Handling programmer errors
Crash immediately - as these errors are bugs, you won't know
in which state your application is. A process control system should
restart the application when it happens, like: supervisord or monit.
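A hedged sketch of failing fast on such a bug; fetchDragon and its validation are invented for illustration:

```javascript
function fetchDragon(name, callback) {
  if (typeof callback !== 'function') {
    // programmer error: fail loudly right away instead of
    // silently continuing in an unknown state
    throw new TypeError('callback must be a function');
  }
  process.nextTick(function () {
    callback(null, { name: name });
  });
}

try {
  fetchDragon('Smaug'); // forgot the callback
} catch (err) {
  console.log(err.message); // callback must be a function
}
```

In production you would not catch this at the call site; you would let the process crash and rely on supervisord or monit to restart it in a known-good state.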
Workflow tips
Start a new project with npm init
The init command helps you create the application's package.json file. It sets some defaults, which can be later modified.
Writing your fancy new application should begin with:
mkdir my-awesome-new-project
cd my-awesome-new-project
npm init
Specify a start and test script
In your package.json file you can set scripts under the scripts section. By default, npm init generates two, start and test. These can be run with npm start and npm test.
Also, as a bonus point, you can define custom scripts here, which can be invoked with npm run-script <SCRIPT_NAME>.
Note that npm sets up $PATH to look in node_modules/.bin for executables. This helps avoid global installs of npm modules.
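As an illustration, a scripts section in package.json might look like the following; the file names and the mocha/jshint commands are placeholders for the example, not recommendations from the post:

```json
{
  "name": "my-awesome-new-project",
  "scripts": {
    "start": "node app.js",
    "test": "mocha test/",
    "lint": "jshint lib/"
  }
}
```

With this in place, npm start runs the server, npm test runs the test suite, and npm run-script lint invokes the custom script.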
Environment variables
Production/staging deployments should be done with environment variables. The most common way to do this is to set the NODE_ENV variable to either production or staging.
Depending on your environment variable, you can load your configuration, with modules like nconf.
Of course, you can use other environment variables in your Node.js applications with process.env, which is an object that contains the user environment.
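A small sketch of branching configuration on NODE_ENV; the hostnames and config shape are made up for the example:

```javascript
// NODE_ENV is just an ordinary environment variable;
// the convention is to branch configuration on it
var env = process.env.NODE_ENV || 'development';

function loadConfig(env) {
  var configs = {
    development: { db: 'localhost/dev' },
    staging:     { db: 'staging.example.com/app' },
    production:  { db: 'db.example.com/app' }
  };
  // fall back to development for unknown environments
  return configs[env] || configs.development;
}

console.log(loadConfig(env).db);
```

Modules like nconf generalize this pattern, layering environment variables, command-line arguments, and config files.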
Do not reinvent the wheel
Always look for existing solutions first. npm has a huge number of packages, so there is a pretty good chance you will find the functionality you are looking for.
Use a consistent style
It is much easier to understand a large codebase when all the code is written in a consistent style. A style guide should include indent rules, variable naming conventions, best practices and lots of other things.
Structured data stored in relational databases has ruled the world for
the last 40 years. Over that time, Structured Query Language (SQL)
emerged as the standard for accessing and manipulating data stored in
relational database management systems. The main reason for the
popularity of SQL was the ease of programming it provided by
encapsulating and abstracting how data is stored, thereby removing a
step in the process and allowing developers to focus on what they wanted
done. Thus, SQL drove the adoption of relational databases to near
ubiquity.
However, we’ve started to hit the limitations of what relational systems
can do. Data no longer follows a uniform structure. With the
digitization of communication and commerce, we now have social data,
scientific data, IoT data, blogs, tweets, and other data that do not fit
the relational structure. Additionally, today’s businesses demand
agility and rapid changes to applications, which means frequent changes
to the schema of the data.
The rigid schema requirements of relational databases are a roadblock to
releasing fast, scalable, and responsive applications. Developers and
enterprises are increasingly expected to bring products to market faster
and cheaper. The need for dynamic schema evolution demands not only a
rethink of data models and databases, but also a new method to access
this data -- a query language.
A new data model
Before we get to the query language, let’s first examine the data model.
Since the early '90s, most business applications were developed in
object-oriented programming models. The popularity of graphical user
interfaces and subsequently the Web made this type of programming the
norm for developing business and customer-facing applications. In Web
applications specifically, JSON is the open standard format that uses
human-readable text to represent data in objects. JSON is what gets
transmitted between server and Web applications. NoSQL databases
designed to store and manage JSON documents started gaining popularity
in the early 2000s thanks to the increase of unstructured data.
More and more businesses are adopting NoSQL databases to support the
broad set of use cases for the next-generation of personalized,
context-sensitive, and location-aware applications. Modern developers
love the flexibility of a JSON database because it represents data in
the same object-based way as their preferred languages (Java, C++, .Net,
Python, Ruby, and so on) without the rigid schema requirements of a
relational database. However, developers building on a NoSQL database
with a JSON data model have been limited by the lack of a query
language.
As a result, the emergence of JSON-based NoSQL databases without a
standard rich query language forced programmers into a dilemma: Either
leverage the power of standard SQL, but be constrained by a rigid
relational model, or develop on a flexible JSON data model, but accept
data model and query limitations.
While the benefits of using JSON are clear, a standard query language
making it easier for developers to build applications is also important.
To give JSON the query language it deserves, a clear starting point was
to simply extend the most popular query language, SQL.
A new query language
N1QL (pronounced "nickel") is a new query language that extends SQL to
work on JSON documents. Put in more technical terms, JSON represents a
non first normal form data model (N1NF), and N1QL operates on that data
model. N1QL extends SQL from traditionally operating on tables and rows
(tuples) to operate on JSON (nested documents). It’s built on a nested
recursive algebra for a nested recursive data model.
Extending SQL to JSON is like reinventing the gas-powered car by giving
it an electric engine, but not changing the steering wheel or any of the
mechanisms that affect how you operate the car. Developers can now
dynamically extend an application’s schema (handled at runtime by the
new query engine), while still using the same familiar SQL language to
operate it.
A JOIN in N1QL is written much like a JOIN in SQL; even a very simple example shows the resemblance. To learn more about what can be done with N1QL, visit the Couchbase N1QL tutorial.
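A sketch of the comparison, using a hypothetical orders/customers schema that is not from the article:

```sql
-- SQL: rows joined on a foreign key
SELECT c.name, o.total
FROM orders o
JOIN customers c ON o.customer_id = c.id;

-- N1QL: the same join expressed over JSON documents,
-- using ON KEYS to follow the stored document key
SELECT c.name, o.total
FROM orders o
JOIN customers c ON KEYS o.customer_id;
```

The SELECT, FROM, and JOIN vocabulary carries over unchanged; only the join condition reflects the document-oriented data model.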
Building a new language as an extension to SQL provides the advantage of
using the same vocabulary and syntax as SQL. Thus, for the first time,
developers can perform complex references using JOINs in NoSQL document
databases. In addition, all the standard SQL language elements such as
statements, clauses, expressions, predicates, operators, aggregation,
and ordering remain the same.
However, because the underlying data model is different and schema
evolution is dynamic at runtime, some additions needed to be made to
extend SQL’s power to JSON. N1QL adds the verbs NEST and UNNEST for composing and flattening nested objects, a family of operators (such as IS NULL, IS MISSING) for handling dynamic schema, and ARRAY functions and operators for traversal, filtering, and recursive processing of array elements.
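A hedged sketch of those extensions, against a hypothetical customers document with an embedded orders array:

```sql
-- UNNEST flattens an embedded array into joined rows
-- (document shape assumed: { "name": ..., "orders": [ { "total": ... } ] })
SELECT c.name, o.total
FROM customers c
UNNEST c.orders AS o;

-- IS MISSING distinguishes a field that is absent from the document
-- from one that is explicitly null
SELECT name
FROM customers
WHERE phone IS MISSING;
```

Neither operation has a direct SQL counterpart, because tables have a fixed set of columns and no nested arrays.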
For the next phase of application development, developers need new query
languages that are architected for schema flexibility and schema
evolution. By pairing the flexibility of an innovative data model with
the power of a query language known by millions of developers and
business analysts, we lay the foundation for building much more flexible
and powerful applications.
JSON and distributed document-oriented databases are here to stay. Let’s make the most of them.
Ravi Mayuram is senior vice president of products and engineering at Couchbase.
New Tech Forum provides a venue to explore and
discuss emerging enterprise technology in unprecedented depth and
breadth. The selection is subjective, based on our pick of the
technologies we believe to be important and of greatest interest to
InfoWorld readers. InfoWorld does not accept marketing collateral for
publication and reserves the right to edit all contributed content. Send
all inquiries to newtechforum@infoworld.com.
There’s a fine line between bravery and idiocy, and it’s usually
determined by the outcome. Such is true in conflict and in IT. One of
the major benefits of experience in either is that you develop a sixth
sense about when discretion truly is the better part of valor.
Shakespeare may have intended this as a joke, but it rings true.
Before we undertake any major action, whether pushing a major new app
version to production, migrating massive data sets from one storage
array to another and cutting over to production, or performing intricate
tasks required to maintain a production system without removing it from
production, we hedge our bets. Well, we should hedge our bets.
I try to imagine any possible blocking problems beforehand and determine
if there are ways to deal with them before they happen. If at all
possible, I like to have already scripted a reversion method to reset
everything back to before any work was done, akin to pulling a ripcord. I
like to leave nothing to chance.
There may be a time during the work when the infrastructure is in an
extremely precarious position, but I like to limit that exposure as much
as possible and have a clear path back to safety. This concept is built
into some code deployment methods, but it’s not as easy in IT in
general.
As we all know, IT is a fickle beast, and there are eventualities that
can’t be fully accounted for. A storage array intended as temporary
holding space that was completely stable for months will throw a disk or
two halfway through the process, becoming a major bottleneck at best or
completely blowing up the migration at worst. Or an order of operations
mistake will be made, and you'll find yourself painted into a corner --
the only questions being how dirty you will get trying to get out and
what you will have to sacrifice along the way.
With enough experience in this world, you can see some of these
possibilities before they happen. You can either bail on the planned
maintenance or upgrade or quickly develop an alternate plan that evades
the problem. However, if more than a few of those issues crop up, even
if there’s a seemingly clear path to success, you may hear that little
IT voice in the back of your head screaming that it’s a trap, and it’s
better to walk away while you still can. It’s usually wise to listen to
that voice.
The basic concept is that no matter what, we should never lose data or
systems during any IT function. Even if everything goes completely
pear-shaped, the resulting questions should center on how long it will
take to recover, not if it can be recovered. Even if it requires a few
extra days of preparation beforehand, there should always be a way to
undo whatever work is being done. It may cost more money in the form of
backup storage or systems, but it’s always worth it, even if it’s
ultimately not needed.
This is where the cowboys come in. It’s in the midst of sensitive and
delicate operations where unforeseen problems appear that a cowboy admin
will push forward without a safety net and try to reach the other side.
If he succeeds, everyone’s thrilled and admiring, and rounds of beers
will be bought at the pub. If he fails, everyone sticks around for hours
or even days of constant stress and pressure until whatever can be
recovered is recovered. These are situations that you don’t want to be
part of if you can help it, because they usually don’t end well.
There’s a trick to determining if the move was a true cowboy move,
however, because to an observer it may be hard to distinguish. I’ve made
plenty of unorthodox saves in the middle of crises that some might
consider unusual or avant garde, but with a backup plan in place if at
all possible. It might be as simple as SCPing a broken management VM
from one array to another in order to repair it and bring it up on
stable storage to facilitate further saving migrations, or reworking
iSCSI LUN masking on the fly to block certain problem servers from
overloading a failing storage array in order to allow a fragile recovery
to complete.
Full disclosure: I've had my share of cowboy moments with no safety net. I'm pretty sure most of us have.
If we lived in a perfect world, these things wouldn’t require any
thought or planning at all. Big data and VM migrations, app and database
rollouts and upgrades -- everything would be as easy and natural as
breathing. We have made great strides in this area over the past few
decades, and there may come a day when that is possible, but it’s
certainly not today. There is no magic bullet; there is only Zuul.
Don't get us wrong: In today's quickly evolving tech world, it's easy to
get lost chasing the turbulent present moment. The pace of change can
be dizzying, and keeping up on everything that's emerging in IT today
can drive even the most devoted tech worker to distraction.
But IT pros who don't take the time to lift their heads and assess the
likely IT landscape five years out may be asking for career trouble.
Because one fact is clear: Organizations of all stripes are increasingly
moving IT infrastructure to the cloud. In fact, most IT pros who've
pulled all-nighters, swapping in hard drives or upgrading systems while
co-workers slept, probably won't recognize their offices' IT
architecture -- or the lack thereof -- in five years.
This shift will have a broad impact on IT's role in the future -- how
departments are structured (or broken up), who sets the technical vision
(or follows it), and which skills rise to prominence (or fall away
almost entirely).
Here we'll look at how the cloud is changing the way IT departments work
and how, five years from now, staff and managers will need to adapt to a
cloud-driven environment.
Cutting the wires
When you step off the elevator at the office or data center five years
from now, what will you see? Fewer servers and fewer co-workers, most
likely. Maintaining on-premises data centers is a costly endeavor, much
more so than connecting to the cloud. If the current trend toward moving
infrastructure to the cloud is any indication, organizations that
haven't already done so will carefully consider those expenses -- and
many will ultimately decide to trim them over the next five years.
The skills necessary to thrive in IT will evolve as well.
"Ten years ago, IT staff were physically plugging special storage cables
into special switches," says Mathew Lodge, vice president in VMware's
cloud services group. "Today they're allocating virtual storage volumes
across the network, and some applications simply do their own storage
allocation via APIs. The future is about enabling the deployment and
consumption of cloud services, not installing, configuring, and managing
stacks."
"Cloud services are disrupters," concurs Jim Rogers, CMO at unified
communications and cloud services company iCore Networks. "They disrupt
the idea that IT departments need to spend most of their time on-site
performing mundane tasks. IT departments now have more viable options to
outsource and automate these tasks than ever before."
As companies' infrastructure needs move increasingly to the cloud, so too will jobs dedicated to maintaining racks.
"IT managers will need good network engineers, help desk staff, security
managers, and business analysts," says Chris McKewon, founder and chief
architect of IT consulting company Xceptional Networks. "But they won't
need server/storage engineers, systems administrators, or data center
managers."
The result will be a fundamental shift in IT's overarching mission at
most organizations, with the support-and-maintain mind-set giving way to
a more strategic, software-centric vision for IT. In fact, the IT staff
of the future is likely to need the skills of a businessperson to stay
current, as their company's software requirements and the options for
satisfying them will be deep, varied, and changing quickly.
"IT managers will have to support applications, not equipment," McKewon
says. "They'll have to be flexible, adaptable, and inclusive. It will be
difficult to set standards on what hardware will and won't work. The
users will do that for them. And cloud-based single sign-on will become
one of the most important elements to a successful cloud strategy. Users
don't want to manage 50 login names and passwords for 50 different
applications."
"The IT department won't need to be onsite monitoring and recovering
devices and systems to ensure they're ready for use," says iCore's
Rogers. "Instead, the IT professionals can spend more time as strategic
planners and business analysts who ensure their organizations are
structured appropriately to support cloud-based office communications.
They'll be responsible for vendor management and integration processes."
And, he says, IT pros "will be educators, hosting essential end-user
trainings for colleagues."
Tim Prendergast, formerly of Adobe and now CEO and founder of AWS
infrastructure security firm Evident.io, sees more crossover roles in
the future.
"They'll look like today's devops and full-stack engineer roles,"
Prendergast says. "We'll see IT become less-siloed ... and heavily
staffed by software engineers. Staff in existing roles will have the
opportunity to grow and embrace new technologies and practices for the
new era of cloud computing, and take advantage of the value found in
rapid iteration environments. The days of server-hugging, deep domain
expertise, and IT-only certifications and training are long gone."
That said, not all legacy systems will disappear. In fact, some may
remain critically important to the business for years to come, whether
IT likes it or not. And somebody will need to care for and feed them.
"Many project managers continue to focus on battling tech debt because
of old technology, bad technology decisions, and one-off technology
patches that continue to drive complexity and reduce speed," says Curt
Jacobsen, principal at PricewaterhouseCoopers. "This battle will be
inevitable -- and IT managers will be managing those legacy issues for a
long time."
IT roles in flux
Here's the big question: As the cloud continues to gain traction, will
companies need a fully staffed IT department? As you may have guessed,
few believe the IT department will disappear. Companies will still
require talented staff who can -- at the very least -- manage systems
integration. But an IT department five years from now will need to keep
pace with nearly constant change.
"I will say that I think the number of implementation and ops-focused
roles will decrease, and those IT staff will have to switch to a
strategic mind-set," says Roman Stanek, CEO of GoodData. "Leaders who
were once focused on operations will have the opportunity to dive more
deeply into the blending of business need with technologies, data
science, data monetization. IT will no longer be the people who try to
manage your database; they'll be the people who are thinking of new ways
to monetize, share, and use your data for organization-wide success."
James Quin, senior director at B-to-B marketing firm CDM Media, says
he's already seeing radical changes in how IT departments operate and
how companies are structuring them.
"The IT department isn't going away, and the role of the CIO isn't going
to be marginalized. But as more workloads shift to the cloud, the
construction of the IT department, by necessity, must change away from
traditional roles to those more focused on vendor, business, security,
and service management," Quin says. "This doesn't mean that development
and administration jobs go away, just that there are fewer of them."
The jobs that remain, Quin says, will focus on what he calls the "shim"
layer that integrates different public cloud services with a few
applications that must remain in-house. These could include highly
sensitive corporate (or scientific) data or medical records and images,
for example.
John Matthews, CIO of IT operations analytics company ExtraHop, is a
20-year veteran of the industry. He says he's seen this sort of sea
change before.
"Like 10 years ago, where we had vertical specialties around things like
phone systems, we will now employ vertical experts who are 100 percent
dedicated to how to make things work in cloud IT environments such as
AWS and Azure," Matthews says. "Specific names of IT positions and what
their roles entail will change, but the function will be the same as
today -- or even 10 years ago. There will be roles best suited for the
general IT knowledge worker, and there will be those that require a
specialist's touch. For example, a lab manager's role might morph and be
70 percent focused on managing workloads in a system like AWS, which
will provide them with additional tools to take on more tasks across the
network."
This is where the cloud's supposed push-button simplicity gives way to a
key facet of IT work in the years to come: the ability to navigate the
complexity of intermixed cloud environments.
"The more complex and interconnected these cloud environments become,
the higher amount of a general understanding and knowledge of how it all
works together will be required from IT teams," Matthews says. "IT will
still need someone who understands and specializes in certain aspects
like storage. These departments will also need their personnel to
understand how storage works across an entire complex cloud environment
and the different aspects of what that relational environment entails.
The days of simple technology verticals are over. If you want to build
it, maintain it, or fix it, you have to be able to see and understand
how it all connects together."
Projecting the future
Some experts see the cloud benefiting the IT department by paving the
way for staffers to expand their roles, doing more development work,
coding, tying systems together, and creating flexible applications that
resemble platforms.
"For a long time, a lot of what went into making the business successful
was the meat-and-potatoes tasks like racking and stacking," says
ExtraHop's Matthews. "But the transition away from those traditional ops
tasks has already happened. Today, the most important thing IT can do
for the business is to configure devices and applications to maximize
performance, control access, and ensure that devices, systems, and
applications are secure."
VMware's Lodge sees a shift in philosophy, where IT collaborates with
the business side to choose what applications are needed, then supports
those applications and ensures compliance.
"[IT staff] will become the 'ops' part of 'devops' because development
teams don't want to do ops -- they want to develop code," Lodge says.
"So there will be a cross-pollination between development and IT
operations, with IT teams becoming much more application- and
developer-savvy, and dev teams understanding the impacts of development
choices on operations."
Steve Shah, VP of product management at Citrix, sees a rising need for
security skills in the years to come, given IT's expanding role in
development and automation projects.
"As these projects will span across both on-prem and cloud resources,"
Shah says, "the legal aspects of data privacy, data sovereignty, and
cryptography -- who has access to keys -- will all come into play as
much as IT engineering."
"The reality is, IT departments are already evolving. In five years,
they'll look more like miniature software companies, with staff
dedicated to solving their customers' problems." -- Curt Jacobsen,
principal, PricewaterhouseCoopers
Sean Jennings, co-founder and senior vice president at cloud-based
enterprise software company Virtustream, sees new opportunities for IT
staff, optimizing business applications for mobile workforces and making
the most of company data.
"IT managers will help mine the vast troves of unstructured data that
organizations have … resulting in increased collaboration with other
departments," Jennings says. "In many cases, IT managers will be
reporting to line-of-business executives and even up to the C-suite --
from the CTO to CIO to CFO and even CEO. We'll see an evolution in the
skills required of IT, with increased emphasis on creative thinking,
problem-solving, and collaboration."
Jeff Sutherland is one of the inventors of the scrum
development process. He also created, along with Alistair Cockburn, Ken
Schwaber and others, the Agile Manifesto in 2001. He's currently CEO of Scrum, Inc., and a speaker, author and thought leader in software development.
Sutherland's latest book, Scrum: The Art of Doing Twice the Work in Half the Time,
is available now. Sutherland sat down with CIO.com's Sharon Florentine
at Agile Alliance 2015 to talk about the past, present and future of
scrum and agile.
CIO: First, Jeff, there's often some
confusion in the C-suite about the differences between scrum and agile
-- and I won't even bring in some of the new iterations like Kanban --
can you break down the basic differences between them?
Sutherland: Sure. To put it very simply,
agile is the larger set of values that can be applied to product
development and management -- or almost anything -- and Scrum, extreme
programming (XP), and Kanban are different subsets, different languages to
describe those principles. Scrum is specific to the software development
practice -- it's lean product development -- but agile is a bigger framework
that you can use in, say, marketing or sales or most other aspects of
business.
CIO: Scrum is celebrating its 20th birthday this year. How has scrum changed in the last 20 years?
Sutherland: Well, first, this
started long before 1995. Back in the 1980s I was hired by a large bank
to help with their projects because everything was broken. Their
developers were burned out, their products were bad, they were always
late, their customers weren't ever happy; all their attempts to fix
these problems were failing, too.
Around this time, I implemented the scrum prototype into one
of their business units, and that quickly became the most profitable
area of the business. Five companies later, in 1993, I was hired to
build an entirely new set of software tools and the company I worked for
needed an entirely new process to do it. And we decided to formalize
that; that's Scrum.
A lot of scrum is based on lean product development
principles, and especially Taiichi Ohno's principles of flow that Toyota
uses in their production methods. At that point, I asked ["Agile
Manifesto" co-author] Ken Schwaber to come in and help with this. He was
CEO of a product management company at the time, but it wasn't working
out -- in fact, he said to me that if he ever tried to use the products
he was selling, he'd end up going bankrupt, so why the hell not try and
sell scrum?
We started formalizing it from there, and that was 1995.
Fast forward to 2001, I got an email from Ken saying he wanted to take
scrum, remove the engineering piece and come up with a formalized
statement that would focus on the process management piece, instead. We all got together
-- the three of us from Scrum, Ken, Alistair and a bunch of other
engineers and sat around trying to figure out how to do this. It took us
a whole day just to decide that we were going to call this 'agile,' and
by the second day, a bunch of guys gave up and just went skiing
instead. [Laughs] Then, Ken and I were in the room with Martin Fowler
and he said, 'Great teams are based on the way they work together, not
on the processes and the tools they use. The whole point is having
working software and getting customers involved directly in the
process.' And when the other guys came back from skiing, we had it and
went from there.
And the scrum framework today is almost exactly the same as
it was in 1993 -- the biggest change is that we have a lot more tools
and technologies to implement it. We have more people using it, more
knowledge and resources to help people understand it and advocate for
it.
CIO: Did you have any sense back then that this methodology you'd created would have such a huge impact on software development?
Sutherland: Oh, of course not. I had no
idea. It's just incredible to look around and see the amazing things
that are happening, even in places you wouldn't expect. There are large
government agencies we're meeting with running projects that look more
like they came out of Google than the government. Everywhere, now,
people are realizing that scrum and agile can make everything faster and
more efficient -- I've even heard of it being used to plan weddings.
Weddings!
CIO: How have scrum and agile affected development culture for the better?
Sutherland: It's changed so many
things both from an individual level to a large business level. I
remember talking to software developers who would say, 'Our projects are
always late, we always have too many bugs, the management says we're
bad developers and the customer is always pissed.' And I would ask how
long this cycle had been going on, and every single one of them said,
"It's been going on as long as I've been in software." So, then, my next
question would always be, 'Do you want to continue to do your job this
way? Do you want to continue to live your life like that?'
Here's an example -- not in software, but of the same kind of principles. Accion Investment Group
is a nonprofit I've done work for, operating mainly in South America,
that gives micro-loans to impoverished families and helps them start
small businesses in their communities. We'd lend them, say, $25.00, and
then we'd coach them on how to scale their business. Once they paid the
loan back, they'd have extra money. Suddenly, a woman who couldn't feed
her kids three weeks before had enough money to buy clothes and shoes.
That means she can send her kids to school. That means her kids will get
an education and start reinvesting in their own community -- it all
takes off from there.
There are parallels in the software development and business
communities with this. Once an organization makes the transformation to
agile or scrum or one of these iterations, there's often so much money
left on the table, they can turn around and invest in their people and
their own business processes and do it really, really well. It
definitely helps, of course, that so much is digital and software-driven
nowadays.
CIO: What new business challenges are arising now, and what new methodologies will arise to address them?
Sutherland: So many major companies right
now are realizing they have to go through a transformation driven by
scrum, or they won't survive. SAP, for example, spent billions on a
product they couldn't sell a few years ago, and after that they had to
look internally and said, 'We have to change and adapt.' The thing is,
nothing 'new' will ever be able to replace the principles Scrum is built
on, because they are so fundamental to human behavior and existence, in
my opinion.
But I will say probably half my time is spent with
management teams helping them better focus, better adapt so they can
build a product organization that will be razor sharp on setting their
priorities. And how they can build stable teams that are way more
productive. Instead of throwing more people at projects that are
failing, they should be focused on throwing projects at existing, stable
teams.
CIO: Why do some CIOs see scrum and agile as being strategic and necessary, and some don't? What's the disconnect?
Sutherland: This is still a relatively new
paradigm, and often those paradigms don't fully shift until the old
guard who made their living by them dies out. In a market like we have
now, though, the market can accelerate that decline. Some of the old
guard aren't feeling the market pressure yet, or they think they're
fine, they're making enough money and the market will change back in
their favor.
As an example, I'll just say, how did that work out for Nokia?
We've unleashed a revolution, and those companies that get
this and do it well, are going to survive. Those that don't, won't. It
may not be tomorrow or next week, but they will not survive. It's not
just a different way of working, it's a different way of living.
This story, "Scrum’s co-creator talks about the framework’s transformational effect" was originally published by
CIO.
It seems like there are lots of programmers out there these days, and lots of really good programmers. But which one is the very best?
Even though there’s no way to really say who the
best living programmer is, that hasn’t stopped developers from
frequently kicking the topic around. ITworld has solicited input and
scoured coder discussion forums to see if there was any consensus. As it
turned out, a handful of names did frequently get mentioned in these
discussions.
Read on for 15 people commonly cited as the world's best living programmer.
Main claim to fame: The brains behind Apollo’s flight control software
Quotes: “Hamilton invented testing; she pretty much formalised Computer Engineering in the US.” ford_beeblebrox
“I think before her (and without disrespect
including Knuth) computer programming was (and to an extent remains) a
branch of mathematics. However a flight control system for a spacecraft
clearly moves programming into a different paradigm.” Dan Allen
“... she originated the term ‘software engineering’ — and offered a great example of how to do it.” David Hamilton
Main claim to fame: Author of The Art of Computer Programming
Quotes: “... wrote The Art of Computer Programming which is probably the most comprehensive work on computer programming ever.” Anonymous
“There is only one large computer program I have used in
which there are to a decent approximation 0 bugs: Don Knuth's TeX.
That's impressive.” Jaap Weel
Quotes: “... probably the most
accomplished programmer ever. Unix kernel, Unix tools, world-champion
chess program Belle, Plan 9, Go Language.” Pete Prokopowicz
“Ken's contributions, more than anyone else I can think of,
were fundamental and yet so practical and timeless they are still in
daily use.” Jan Jannink
Quotes: “... there was the time when he
single-handedly outcoded several of the best Lisp hackers around, in the
Symbolics vs LMI fight.” Srinivasan Krishnan
“Through his amazing mastery of programming and force of will, he created a whole sub-culture in programming and computers.” Dan Dunay
“I might disagree on many things with the great man, but he is still one of the most important programmers, alive or dead” Marko Poutiainen
“Try to imagine Linux without the prior work on the GNU project. Stallman's the bomb, yo.” John Burnette
Quotes: “He wrote the [Pascal] compiler in assembly
language for both of the dominant PC operating systems of the day (DOS
and CPM). It was designed to compile, link and run a program in seconds
rather than minutes.” Steve Wood
“I revere this guy - he created the development tools that
were my favourite through three key periods along my path to becoming a
professional software engineer.” Stefan Kiryazov
Quotes: “... he is the same guy who has written an
exceptional search framework(lucene/solr) and opened the big-data
gateway to the world(hadoop).” Rajesh Rao
“His creation/work on Lucene and Hadoop (among other
projects) has created a tremendous amount of wealth and employment for
folks in the world….” Amit Nithianandan
Quotes: “To put into perspective what an
achievement this is, he wrote the Linux kernel in a few years while the
GNU Hurd (a GNU-developed kernel) has been under development for 25
years and has still yet to release a production-ready example.” Erich Ficker
“Torvalds is probably the programmer's programmer.” Dan Allen
Quotes: “He wrote his first rendering engine
before he was 20 years old. The guy's a genius. I wish I were a quarter
the programmer he is.” Alex Dolinsky
“... Wolfenstein 3D, Doom and Quake were revolutionary at the time and have influenced a generation of game designers.” dniblock
“He can write basically anything in a weekend....” Greg Naughton
“He is the Mozart of computer coding….” Chris Morris
Code hosting website, GitHub, has published a graph which
shows just how popular different programming languages are on the site
since its launch in 2008. The results revealed some interesting trends
and how different languages have picked up momentum in recent years.
The graph ranks the languages used in public and private
repositories on GitHub and shows that Java has gained the most traction.
It was ranked the 7th most popular language on the website in 2008 but
has now shot to second place. Not surprising given how Java is often
used to build open source software. In fact, open source is probably one
of the reasons the ranks for each programming language have shifted a
lot in the last few years.
GitHub notes another contributing factor for Java’s
popularity could be the growth of the Android OS and the increasing
demand for version control platforms at businesses and enterprises.
We want to know which programming language you work with the most, and why. Let us know in the comments.
This is the third part of An Introduction to GameplayKit. If you haven't yet gone through the first part and the second part, then I recommend reading those tutorials first before continuing with this one.
Introduction
In this third and final tutorial, I am going to teach you about two more features you can use in your own games:
random value generators
rule systems
In this tutorial, we will first use one of GameplayKit's
random value generators to optimize our initial enemy spawning
algorithm. We will then implement a basic rule system in combination
with another random distribution to handle the respawning behavior of
enemies.
For this tutorial, you can use your copy of the completed project
from the second tutorial or download a fresh copy of the source code
from GitHub.
1. Random Value Generators
Random values can be generated in GameplayKit by using any class that conforms to the GKRandom protocol. GameplayKit provides five classes that conform to this protocol: three random sources and two random distributions.
The main difference between random sources and random distributions is
that distributions use a random source to produce values within a
specific range and can manipulate the random value output in various
other ways.
The aforementioned classes are provided by the framework so
that you can find the right balance between performance and randomness
for your game. Some random value generating algorithms are more complex
than others and consequently impact performance.
For example, if you need a random number generated every frame (sixty
times per second), then it would be best to use one of the faster
algorithms. In contrast, if you are only infrequently generating a
random value, you could use a more complex algorithm in order to produce
better results.
The three random source classes provided by the GameplayKit framework are GKARC4RandomSource, GKLinearCongruentialRandomSource, and GKMersenneTwisterRandomSource.
GKARC4RandomSource
This class uses the ARC4 algorithm and is suitable for most
purposes. This algorithm works by producing a series of random numbers
based on a seed. You can initialize a GKARC4RandomSource
with a specific seed if you need to replicate random behavior from
another part of your game. An existing source's seed can be retrieved
from its read-only seed property.
GKLinearCongruentialRandomSource
This random source class uses the basic linear congruential
generator algorithm. This algorithm is more efficient and performs
better than the ARC4 algorithm, but it also generates values that are
less random. You can fetch a GKLinearCongruentialRandomSource object's seed and create a new source with it in the same manner as a GKARC4RandomSource object.
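To make the idea concrete, here is a minimal sketch (in modern Swift, outside GameplayKit) of how a linear congruential source works. The multiplier and increment are illustrative constants from Knuth's MMIX, not the ones GameplayKit uses internally:

```swift
// Sketch of a linear congruential generator: state = a * state + c (mod 2^64).
// Illustrative only -- GKLinearCongruentialRandomSource's internals differ.
struct LinearCongruentialSource {
    var state: UInt64  // plays the role of the seed

    mutating func nextUniform() -> UInt64 {
        // &* and &+ wrap on overflow, giving the "mod 2^64" for free
        state = state &* 6364136223846793005 &+ 1442695040888963407
        return state
    }

    mutating func nextInt(lowest: Int, highest: Int) -> Int {
        return lowest + Int(nextUniform() % UInt64(highest - lowest + 1))
    }
}

// The same seed always replays the same sequence, which is why exposing a
// read-only seed property is enough to reproduce random behavior elsewhere.
var a = LinearCongruentialSource(state: 2015)
var b = LinearCongruentialSource(state: 2015)
print(a.nextInt(lowest: 0, highest: 2) == b.nextInt(lowest: 0, highest: 2)) // true
```

This also illustrates why a linear congruential source is fast but less random: each value is a single multiply-add away from the previous one.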
GKMersenneTwisterRandomSource
This class uses the Mersenne Twister
algorithm and generates the most random results, but it is also the
least efficient. Just like the other two random source classes, you can
retrieve a GKMersenneTwisterRandomSource object's seed and use it to create a new source.
The two random distribution classes in GameplayKit are GKGaussianDistribution and GKShuffledDistribution.
GKGaussianDistribution
This distribution type ensures that the generated random values
follow a Gaussian distribution—also known as a normal distribution. This
means that the majority of the generated values will fall in the middle
of the range you specify.
For example, if you set up a GKGaussianDistribution object with a minimum value of 1, a maximum value of 10, and a standard deviation of 1, roughly 68% of the results would fall within one deviation of the mean. I will explain this distribution in more detail when we add one to our game later in this tutorial.
GKShuffledDistribution
This class can be used to make sure that random values are
uniformly distributed across the specified range. For example, if you
generate values between 1 and 10, and a 4 is generated, another 4 will not be generated until all of the other numbers between 1 and 10 have also been generated.
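This behavior is essentially that of a "shuffle bag", which can be sketched in a few lines of plain modern Swift (a hypothetical helper for illustration, not GameplayKit's actual implementation):

```swift
// "Shuffle bag" sketch: every value in the range is drawn exactly once
// before any value repeats, similar to GKShuffledDistribution's behavior.
struct ShuffleBag {
    private let range: ClosedRange<Int>
    private var bag: [Int] = []

    init(lowestValue: Int, highestValue: Int) {
        range = lowestValue...highestValue
    }

    mutating func nextInt() -> Int {
        if bag.isEmpty {
            bag = Array(range).shuffled()  // refill and reshuffle when empty
        }
        return bag.removeLast()
    }
}

var distribution = ShuffleBag(lowestValue: 1, highestValue: 10)
let firstPass = (0..<10).map { _ in distribution.nextInt() }
print(Set(firstPass).count) // 10 -- no repeats until every value has appeared
```

A shuffled distribution like this is handy in games precisely because it prevents streaks of the same value, at the cost of being somewhat predictable near the end of each pass.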
It's now time to put all this in practice. We are going to
be adding two random distributions to our game. Open your project in
Xcode and go to GameScene.swift. The first random distribution we'll add is a GKGaussianDistribution. Later, we'll also add a GKShuffledDistribution. Add the following two properties to the GameScene class.
var initialSpawnDistribution = GKGaussianDistribution(randomSource: GKARC4RandomSource(), lowestValue: 0, highestValue: 2)
var respawnDistribution = GKShuffledDistribution(randomSource: GKARC4RandomSource(), lowestValue: 0, highestValue: 2)
In this snippet, we create two distributions with a minimum value of 0 and a maximum value of 2. For the GKGaussianDistribution, the mean and deviation are automatically calculated according to the following equations:
mean = (maximum + minimum) / 2
deviation = (maximum - minimum) / 6
The mean of a Gaussian distribution is its midpoint and the
deviation is used to calculate what percentage of values should be
within a certain range from the mean. The percentage of values within a
certain range is:
68.27% within 1 deviation of the mean
95.45% within 2 deviations of the mean
100% within 3 deviations of the mean (the lowest and highest values lie exactly 3 deviations away)
This means that approximately 68% of the generated values
should be equal to 1. This will result in more red dots in proportion to
green and yellow dots. To make this work, we need to update the initialSpawn method.
In the for loop, replace the following line:
let respawnFactor = arc4random() % 3 // Will produce a value between 0 and 2 (inclusive)
with the following:
let respawnFactor = self.initialSpawnDistribution.nextInt()
The nextInt method can be called on any object that conforms to the GKRandom protocol and will return a random value based on the source and, if applicable, the distribution that you are using.
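If you want to convince yourself of the percentages above without GameplayKit, here is a quick plain-Swift experiment using a Box-Muller Gaussian clamped to three deviations (an approximation of the framework's behavior, not its actual code):

```swift
import Foundation

// Empirical check of the 68-95-100 rule for a Gaussian clamped to
// three deviations, matching the distribution's lowest/highest bounds.
func clampedGaussian(mean: Double, deviation: Double) -> Double {
    // Box-Muller transform: two uniform samples -> one standard normal
    let u1 = Double.random(in: 0.0001..<1.0)
    let u2 = Double.random(in: 0.0..<1.0)
    let z = sqrt(-2.0 * log(u1)) * cos(2.0 * Double.pi * u2)
    return min(max(z, -3.0), 3.0) * deviation + mean
}

// Same setup as the game's distribution: lowest 0, highest 2
let mean = (0.0 + 2.0) / 2.0       // 1.0
let deviation = (2.0 - 0.0) / 6.0  // ~0.333

let samples = 100_000
var withinOne = 0
for _ in 0..<samples {
    if abs(clampedGaussian(mean: mean, deviation: deviation) - mean) <= deviation {
        withinOne += 1
    }
}
let fraction = Double(withinOne) / Double(samples)
print(fraction) // roughly 0.68
</test-free note: value fluctuates slightly per run>
```

About 68% of the samples land within one deviation of the mean, which is why the middle value dominates the spawn results.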
Build and run your app, and move around the map. You should see a lot more red dots in comparison to both green and yellow dots. The
second random distribution that we'll use in the game will come into
play when handling the rule system-based respawn behavior.
2. Rule Systems
GameplayKit rule systems are used to better organize
conditional logic within your game and to introduce fuzzy logic. With
fuzzy logic, entities within your game can make
decisions based on a range of different rules and variables, such as
player health, current enemy count, and distance to the enemy. This can
be very advantageous when compared to simple if and switch statements.
Rule systems, represented by the GKRuleSystem class, have three key parts to them:
Agenda. This is the set of rules that have been added
to the rule system. By default, these rules are evaluated in the order
that they are added to the rule system. You can change the salience property of any rule to specify when you want it to be evaluated.
State Information. The state property of a GKRuleSystem
object is a dictionary, which you can add any data to, including custom
object types. This data can then be used by the rules of the rule
system when returning the result.
Facts. Facts within a rule system represent the
conclusions drawn from the evaluation of rules. A fact can also be
represented by any object type within your game. Each fact also has a
corresponding membership grade, which is a value between 0.0 and 1.0. This membership grade represents the inclusion or presence of the fact within the rule system.
Rules themselves, represented by the GKRule class, have two major components:
Predicate. This part of the rule returns a boolean
value, indicating whether or not the requirements of the rule have been
met. A rule's predicate can be created by using an NSPredicate object or, as we will do in this tutorial, a block of code.
Action. When the rule's predicate returns true,
its action is executed. This action is a block of code in which you can
perform any logic once the rule's requirements have been met. This is
where you generally assert (add) or retract (remove) facts within the
parent rule system.
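Before wiring this into the game, the evaluate/assert/grade flow can be illustrated with a toy rule system in plain Swift. The names and API here are hypothetical simplifications for illustration, not GKRuleSystem's real interface:

```swift
// Toy rule system: rules inspect shared state and, when their predicate
// passes, assert a fact with a membership grade between 0.0 and 1.0.
struct ToyRule {
    let predicate: ([String: Double]) -> Bool
    let fact: String
    let grade: Double
}

struct ToyRuleSystem {
    var state: [String: Double] = [:]
    var rules: [ToyRule] = []
    private(set) var facts: [String: Double] = [:]  // fact name -> grade

    mutating func evaluate() {
        facts.removeAll()  // like calling reset() before each evaluation
        for rule in rules where rule.predicate(state) {
            // Asserting the same fact twice accumulates its grade, capped at 1.0
            facts[rule.fact] = min(1.0, (facts[rule.fact] ?? 0.0) + rule.grade)
        }
    }

    func grade(forFact name: String) -> Double {
        return facts[name] ?? 0.0
    }
}

var system = ToyRuleSystem()
system.rules = [
    ToyRule(predicate: { ($0["nodeCount"] ?? 0) <= 50 }, fact: "shouldSpawn", grade: 0.5),
    ToyRule(predicate: { ($0["distance"] ?? 1e9) <= 200 }, fact: "shouldSpawn", grade: 0.5),
]
system.state = ["nodeCount": 30, "distance": 150]
system.evaluate()
print(system.grade(forFact: "shouldSpawn")) // 1.0 -- both rules fired
```

Note how a grade of exactly 1.0 only appears when both half-grade rules are satisfied; the game code below uses the same trick with its "shouldSpawn" fact.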
Let's see how all this works in practice. For our rule system, we are going to create three rules that look at:
the distance from the spawn point to the player. If this
value is relatively small, we will make the game more likely to spawn
red enemies.
the current node count of the scene. If this is too high, we don't want any more dots being added to the scene.
whether or not a dot is already present at the spawn point. If there isn't, then we want to proceed to spawn a dot here.
First, add the following property to the GameScene class:
var ruleSystem = GKRuleSystem()
Next, add the following code snippet to the didMoveToView(_:) method:
let playerDistanceRule = GKRule(blockPredicate: { (system: GKRuleSystem) -> Bool in
    if let value = system.state["spawnPoint"] as? NSValue {
        let point = value.CGPointValue()
        let xDistance = abs(point.x - self.playerNode.position.x)
        let yDistance = abs(point.y - self.playerNode.position.y)
        let totalDistance = sqrt((xDistance * xDistance) + (yDistance * yDistance))
        return totalDistance <= 200
    } else {
        return false
    }
}) { (system: GKRuleSystem) -> Void in
    system.assertFact("spawnEnemy")
}
let nodeCountRule = GKRule(blockPredicate: { (system: GKRuleSystem) -> Bool in
    return self.children.count <= 50
}) { (system: GKRuleSystem) -> Void in
    system.assertFact("shouldSpawn", grade: 0.5)
}
let nodePresentRule = GKRule(blockPredicate: { (system: GKRuleSystem) -> Bool in
    if let value = system.state["spawnPoint"] as? NSValue where self.nodesAtPoint(value.CGPointValue()).count == 0 {
        return true
    } else {
        return false
    }
}) { (system: GKRuleSystem) -> Void in
    let grade = system.gradeForFact("shouldSpawn")
    system.assertFact("shouldSpawn", grade: (grade + 0.5))
}
self.ruleSystem.addRulesFromArray([playerDistanceRule, nodeCountRule, nodePresentRule])
With this code, we create three GKRule
objects and add them to the rule system. The rules assert a particular
fact within their action block. If you do not provide a grade value and
just call the assertFact(_:) method, as we do with the playerDistanceRule, the fact is given a default grade of 1.0.
You will notice that for the nodeCountRule we only assert the "shouldSpawn" fact with a grade of 0.5. The nodePresentRule then asserts this same fact and adds on a grade value of 0.5. This is done so that when we check the fact later on, a grade value of 1.0 means that both rules have been satisfied.
You will also see that both the playerDistanceRule and nodePresentRule access the "spawnPoint" value of the rule system's state dictionary. We will assign this value before evaluating the rule system.
Finally, find and replace the respawn method in the GameScene class with the following implementation:
func respawn() {
    let endNode = GKGraphNode2D(point: float2(x: 2048.0, y: 2048.0))
    self.graph.connectNodeUsingObstacles(endNode)
    for point in self.spawnPoints {
        self.ruleSystem.reset()
        self.ruleSystem.state["spawnPoint"] = NSValue(CGPoint: point)
        self.ruleSystem.evaluate()
        if self.ruleSystem.gradeForFact("shouldSpawn") == 1.0 {
            var respawnFactor = self.respawnDistribution.nextInt()
            if self.ruleSystem.gradeForFact("spawnEnemy") == 1.0 {
                respawnFactor = self.initialSpawnDistribution.nextInt()
            }
            var node: SKShapeNode? = nil
            switch respawnFactor {
            case 0:
                node = PointsNode(circleOfRadius: 25)
                node!.physicsBody = SKPhysicsBody(circleOfRadius: 25)
                node!.fillColor = UIColor.greenColor()
            case 1:
                node = RedEnemyNode(circleOfRadius: 75)
                node!.physicsBody = SKPhysicsBody(circleOfRadius: 75)
                node!.fillColor = UIColor.redColor()
            case 2:
                node = YellowEnemyNode(circleOfRadius: 50)
                node!.physicsBody = SKPhysicsBody(circleOfRadius: 50)
                node!.fillColor = UIColor.yellowColor()
            default:
                break
            }
            if let entity = node?.valueForKey("entity") as? GKEntity,
                let agent = node?.valueForKey("agent") as? GKAgent2D where respawnFactor != 0 {
                entity.addComponent(agent)
                agent.delegate = node as? ContactNode
                agent.position = float2(x: Float(point.x), y: Float(point.y))
                agents.append(agent)
                let startNode = GKGraphNode2D(point: agent.position)
                self.graph.connectNodeUsingObstacles(startNode)
                let pathNodes = self.graph.findPathFromNode(startNode, toNode: endNode) as! [GKGraphNode2D]
                if !pathNodes.isEmpty {
                    let path = GKPath(graphNodes: pathNodes, radius: 1.0)
                    let followPath = GKGoal(toFollowPath: path, maxPredictionTime: 1.0, forward: true)
                    let stayOnPath = GKGoal(toStayOnPath: path, maxPredictionTime: 1.0)
                    let behavior = GKBehavior(goals: [followPath, stayOnPath])
                    agent.behavior = behavior
                }
                self.graph.removeNodes([startNode])
                agent.mass = 0.01
                agent.maxSpeed = 50
                agent.maxAcceleration = 1000
            }
            node!.position = point
            node!.strokeColor = UIColor.clearColor()
            node!.physicsBody!.contactTestBitMask = 1
            self.addChild(node!)
        }
    }
    self.graph.removeNodes([endNode])
}
This method will be called once every second and is very similar to the initialSpawn method. There are a number of important differences in the for loop though.
We first reset the rule system by calling its reset
method. This needs to be done when a rule system is sequentially
evaluated. This removes all asserted facts and related data to ensure no
information is left over from the previous evaluation that might
interfere with the next.
We then assign the spawn point to the rule system's state dictionary. We use an NSValue object, because the CGPoint data type does not conform to Swift's AnyObject protocol and cannot be assigned to this NSMutableDictionary property.
We evaluate the rule system by calling its evaluate method.
We then retrieve the rule system's membership grade for the "shouldSpawn" fact. If this is equal to 1, we continue with respawning the dot.
Finally, we check the rule system's grade for the "spawnEnemy" fact and, if it's equal to 1, use the normally distributed random generator to create our respawnFactor.
The rest of the respawn method is the same as the initialSpawn
method. Build and run your game one final time. Even without moving
around, you will see new dots spawn when the necessary conditions are
met.
Conclusion
In this series on GameplayKit, you have learned a lot. Let's briefly summarize what we've covered.
Entities and Components
State Machines
Agents, Goals, and Behaviors
Pathfinding
Random Value Generators
Rule Systems
GameplayKit is an important addition to iOS 9 and OS X El
Capitan. It eliminates a lot of the complexities of game development. I
hope that this series has motivated you to experiment more with the
framework and discover what it is capable of.
As always, please be sure to leave your comments and feedback below.