
News stories from Thursday 19 January, 2017

16:01 SymfonyLive Paris conference is coming, early bird registration ends on Friday! » Post from Symfony Blog

SymfonyLive Paris is the French edition of the SymfonyLive conference and is therefore held entirely in French. The following blog post explains the conference organization to the French-speaking community.

The SymfonyTour 2017 makes another stop in the City of Light with the SymfonyLive Paris 2017 conference! Organized by SensioLabs, the flagship event of the French-speaking Symfony community will take place on 30 and 31 March 2017 at the Cité Internationale Universitaire de Paris.

Come share and exchange ideas with other members of the community, and attend talks that will help you discover the latest features of the framework and improve the development of your web projects!

Don’t miss the two days of pre-conference training on 28 and 29 March, held at the SensioLabs offices. You can build your own training combo, choosing one workshop per day from the following topics:

Tuesday 28 March:

– Mastering the Workflow component (offered for the first time!)

– Front-end development with Webpack (offered for the first time!)

Wednesday 29 March:

– REST API development with API Platform

– Mastering authentication with Symfony Guard

– Performance analysis of Symfony applications

Two registration options:

  • Conference-only ticket, giving you access to the talks on 30 and 31 March

  • Training + Conference package, to get the most out of SymfonyLive Paris 2017! Level up your skills, then exchange ideas with the community!

Register now for the training sessions and the conference to take advantage of the Early Bird rates, available THROUGH 20 JANUARY, with guaranteed lowest prices (-20% on the Training + Conference package)!

Want to take part in the conference yourself? Come share your experience on stage by becoming a conference speaker! Don’t wait: our Call for Papers is open until 27 January! Send us your talk proposals today, and feel free to submit several to increase your chances of being selected.

All the CFP details are online, and don’t forget: the conference is a French-language event, so proposals must be submitted in French only!

Ready for an enriching, memorable experience and some new encounters? Then join us at SymfonyLive Paris 2017!



News stories from Tuesday 17 January, 2017

16:00 Guerrilla Innovation » Post from A List Apart: The Full Feed

In a culture like Google’s, having paid time to innovate is celebrated. But most of us don’t work at Google; most of us work at places that are less than thrilled when someone has a bright new idea that will be amazing.

After all, who has time to try new things when the things we’re doing now aren’t broken? No one wants to be forced to use another app, to have yet another thing they are expected to log into, only to see it die out in six months.

So how do you push an idea through? How can you innovate if you work in a less-than-innovative place?

It takes more than a big idea

Let’s say you just saw a demo of someone using a prototyping tool like UXPin and you’ve got this big vision of your team incorporating it into your development process. With a tool like this, you realize, you can quickly put some concepts together for a website and make it real enough to do user testing within two days! Seems pretty invaluable. Why haven’t we been using this all along?

You create an account and start exploring. It’s pretty damn awesome. You put a demo together to share with your team at your next meeting.

Your excitement is completely drained within five minutes.

“Seems like a lot of extra work.”

“Why would we create a prototype just to rewrite it all in code?”

“Let’s just build it in Drupal.”

Knife. In. Heart.

You can see the value in the product, but you didn’t take the necessary steps to frame the problem you want to solve. You didn’t actually use this exciting new tool to build a case around the value it will have for your company.

So right now, to your coworkers, this is just another shiny object. In the web development world, a new shiny object comes along every couple of seconds. You need to do some legwork upfront to understand which shiny objects are worth your team’s time and which are, well, just more shiny objects.

Anyone can come up with an idea on the fly or think they’re having an Oprah Aha! Moment, but real innovation takes hours of work, trying and failing over and over, a serious amount of determination, and some stealth guerrilla tactics.

Frame the problem

The first step in guerrilla innovation is making sure you’re solving the right problem. Just because your idea genuinely is amazing doesn’t mean it will provide genuine value. If it doesn’t solve a tangible problem or provide some sort of tangible benefit, you have little or no chance of getting your team and your company to buy into your idea.

Coolness alone isn’t enough. And “cool” is always up for interpretation.

Framing the problem allows you to look at it from many different angles and see different solutions that may not have occurred to you.

By diving deep into the impact and effects your idea will have, you will start to see the larger picture and may even decide your idea wasn’t so amazing after all. Or, this discovery could lead you to a different solution that truly is innovative and life-changing.

Start at the end

When your idea is implemented and everything goes as planned, what benefit will it provide?

Make a list of people who would theoretically benefit from this idea. Write down who they are and how the idea would help them.

Let’s go back to our prototyping tool example. Who would benefit from it the most? The end user looking for specific content on your website. Using a prototyping tool would allow you to do more user testing earlier in the process, letting you tweak and iterate your design based on feedback that could improve the overall site experience. An improved experience would, ideally, allow visitors to find the content they are looking for more easily; the content would therefore be more useful and usable for them.

If visitors have a better experience, that could result in a better conversion rate—which in turn would help your manager’s goals as web sales improve.

That benefit could extend to your team as a whole, too: a prototyping tool could improve communication between the marketing group and the development group. Using a prototyping tool would help quickly visualize ideas so that everyone can see how the site is evolving. Questions could be asked and addressed sooner. A prototyping tool could be just the thing you need to get everyone on the same page about content and identified goals.

Identify your target audience(s)

The top two audiences with the potential to get the most benefit from your innovative idea are your target audiences. If the end user of the website will receive the most benefit, then that is your primary target audience. If your manager receives a benefit as a result, then that is your secondary target audience.

Take some time to develop a persona around each of your top target audiences. A persona is a document that summarizes research trends and data that have been collected about a key audience segment. Although a persona depicts a single person, it should never be based on one real individual; rather, it’s an amalgam of characteristics from many people in the real world. A persona is usually one page and includes characteristics such as attitude, goals, skill level, occupation, and background. For more on developing personas to improve user experience, check out Usability.gov.

Six months from now, when you’re waist-deep in this idea, your coworkers are complaining about the extra workload, and you’re wondering why you ever decided to do this, you will look at the whiteboard where your personas are displayed and remember that they are your target audience, not you. All of this extra work is for their benefit.

As you implement a workflow using a prototyping tool and the decision gets made to do only one round of user testing instead of the three rounds that were initially discussed, you can reference your personas and ask who stands to benefit from that decision. Are you just saving time for the developers and the stakeholders in an attempt to pump out websites faster? Or will this really benefit the target audience?

Do a pre-postmortem

Understanding the risks of innovation does not mean backing away from your idea and giving up. When you understand the obstacles in front of you, you can more easily identify them and develop solutions before potential failures take place.

One useful exercise is to do a postmortem report even before you begin. Start anticipating the reasons the tool or project will fail so you can avoid those pitfalls. Some questions you might ask in a postmortem:

  • Who was involved in the project?
  • What went well with the project?
  • What did not go well?
  • What can we do next time to improve our results?

With our prototyping example, a possible reason for failure might be the team not adopting the tool and it never gaining traction. You need the team to be on the same page and using the same workflow; lack of adoption could be detrimental to progress.

Analyze your current situation

What sorts of effects are you seeing right now because of this identified problem? Gather some data to prove there is an actual problem that needs to be addressed. If your help desk continually receives calls about users unable to find a specific button on your website, for example, then you have some evidence of a bad user experience.

Do some research

Ask your coworkers what they know about prototyping. Ask if they have ever experimented with any prototyping tools.

Ask your end users about the content on your site. Gather some information about just how bad the user experience really is.

This is not the time to pitch your idea. You are in complete listening/observation mode. Save the elevator pitch for later, when you have all the information and are confident this is the right solution to a very specific problem and you are prepared to answer the questions that will come.

Assess your tools

Are there any tools you use now that are similar to the tool you are proposing? If so, what are their benefits and downfalls?

Take the UXPin example. Does your team use paper to do prototypes right now? Does the graphic designer use Photoshop to start with wireframes/prototypes before doing a high-res layout?

Having a ready list of pros and cons for the tools you currently use will help you build a case around why your solution is superior and will show that you’ve done your homework.

Check your ego

Scrutinize your motivations for wanting to introduce a new tool. Do you want to try something new just to take control of a situation? If the graphic designer does a fine job using Photoshop to develop a prototype but you don’t know how to use Photoshop, that’s not a great reason to try a new tool.

However, if you have a team of six and only one person knows how to use Photoshop, choosing a more accessible tool with a shorter learning curve could be the right move.

Explore other solutions

Are there other tools out there that will solve the problem you discovered?
If you don’t yet have room in the budget for UXPin, can something else get you by while you prove the value of this type of tool? Can you use paper prototypes for a few months while the team adjusts to this new part of their workflow?

Sometimes starting with something less complex can be beneficial. Anyone can use pen and paper, but learning new software can be daunting and time-consuming.

Still think this is an awesome idea?

You now understand the tangible benefits of implementing your innovative idea and you know who stands to gain from it. You can foresee both the rewards of implementing it and the potential risks of not implementing it.

Your motives are good, you’ve analyzed your current situation for similar tools or processes that may already be in place, and you’ve explored other potential solutions. You are well on your way to building a strong case around your innovative idea. At this point, you’ve put a lot of time and effort into developing it. Do you still think it’s a good idea, and are you as excited as you were when you started?

If you’ve lost your drive and excitement at this point, or have been unable to visualize any real benefit, the idea may not be worth implementing. That’s okay. The way you will land on a really great idea is by testing many not-so-great ideas until you find one that fits.

Your continued excitement and drive will be necessary as you start to implement your idea and work toward gaining supporters.

Start small and fail as soon as possible

Even if you’re still quite sure this idea is amazing, start small and keep an open mind. A thousand questions will come to mind as you begin using an actual product with real users.

As you start running a couple of tests, use language like “experiment” instead of “implementation.” This leaves room for error and growth. You want to know what’s not going to work as much as you want to know what is going to work. And if someone asks what you’re doing, it sounds way more innocent if you say you’re running a few experiments that you’re going to share with the team than if you say you’re implementing a prototyping tool in your web development process.

If you’re working on a current website project, try creating just one page using the prototyping tool on your own time, not as a part of the official project process. See how it goes building just one page for now. Even better, try making just one element of the page, like the header or navigation. By starting small you will have fewer variables to take into consideration. Remember, right now you’re evaluating the tool itself, not necessarily the user experience of your website.

Then take your prototype and see what kind of feedback you can get by testing it with real end users.

Is the prototype responsive? What URL did you need to use to access it? Was it easy to direct users to this URL? Can you record mouse movements or clicks, and do you need to? How are you documenting their feedback to the site? Were they able to use their own device, or did you need to provide it? What are you going to do with the feedback and observations you’ve gained?

Do several tiny experiments like this, making adjustments as you go, until you’re more comfortable with the tool, its features, and the results you get from it. Your confidence with the tool will give your team confidence with it as well.

Don’t get fired

Most companies don’t mind their employees doing research about their work on company time. Unfortunately, some do mind. Using your own device on your lunch hour or before and after work may be your only option.

Even if your job does allow you to research and learn on the clock, be respectful of time. Spending several months straight iterating on one idea might not be good for your next employee review. 3M designates 15 percent time for employees to focus on innovation; Google has famously allowed up to 20 percent of employee time to focus on new innovative ideas. Try to gauge what percentage of time you could reasonably spend on your research without neglecting your real job.

Be transparent about what you’re doing. Hiding it and sneaking around will give the wrong impression. Let your boss know you’re curious about a new tool and you’re just running a few experiments to explore it more. Curious, experiment, explore—as I suggested earlier, these are all safe words implying no level of commitment or pressure.

Win allies

Presumably you have a few friends in the office; take them out to lunch and toss them the idea. Let them know about the experiments you’re running and the results you’re getting. Ask if they want to see what you’ve been working on.

It might take a while for anyone to show some interest. Don’t give up if your excitement isn’t mirrored immediately and don’t be pushy. Remember, you want your colleagues to be in your corner.

Also, bouncing your idea off your coworkers is great practice for telling your boss. Your coworkers will definitely ask you a bunch of questions you haven’t thought of yet and will express viewpoints you haven’t considered.

Listen to their opposition and use their concerns to build your case. Do they think adding a new tool to the workflow will slow down the process? Explore that concern; next time you talk, offer some data and insight about how that assumption might not be true.

Having your team on your side will go a long way when presenting this to your boss, but it doesn’t have to be a deal-breaker if they’re not. Sometimes our coworkers are just so scared of change that no amount of data will make them comfortable. They will likely express their concerns when you bring your idea up in front of the boss; having a prepared response makes you look confident.

Get your boss’ support

Time to go up a level. Please do not put together a giant presentation, wear your best power suit, and put your heart on the line. If your experience is anything like mine, you’ll just spend the rest of the day crying off and on in the bathroom.

A definitive, polished presentation can be off-putting. It makes you look like you’ve already solved the whole problem. You want to appear open to suggestions—because you are.

The approach

You know your relationship with your boss, and how to approach them, better than anyone else. For me, the best way is to wait for the right opening and mention the new idea in passing. Be prepared to show all of your progress and make some sort of proposal right on the spot. Make it seem easy and low-risk, with clear next steps. I’ve found it beneficial to address the concerns of your team up front to show you value their opinion and input. Bosses love teamwork.

If there isn’t clear interest from your boss, ask them what other data or information they would like to see to help support this idea. What are their concerns or hesitations?

At this point, consider asking for permission to continue to experiment on a broader level. The word “implement” really freaks people out. Trying a prototyping tool in the web-development process for three months instead of implementing it forever sounds a lot less risky.

Persevere

If you can’t stick with your idea long enough to do some research and run some experiments, why should anyone else? If it truly matters to you and you can see your idea making a real change in your company or within your work environment, hang in there for the long haul.

When the graphic designers agree to use UXPin as a prototyping tool and the User Experience team (if you’re lucky enough to have a UX team, really, I’m not jealous) says they will give it a try for end-user testing, ask to be a part of their process. Ask them to invite you to the end-user testing sessions and the design reviews with the stakeholders.

Be in those sessions and meetings as the idea is implemented so you can continue to reference your personas and make sure decisions are made for the right reasons. That way, you’ll be in the front row to see positive change happen as you guide your idea and hard work into something truly innovative.

As your idea starts to gain traction and your experiments turn into a real process—see things through. Don’t just hand off your idea and hope for the best like a child waiting for the school bus. Drive the damn bus.

News stories from Sunday 15 January, 2017

10:30 A week of symfony #524 (9-15 January 2017) » Post from Symfony Blog

This week Symfony published the 2.7.23, 2.8.16, 3.1.9 and 3.2.2 maintenance versions. In addition, the upcoming Symfony 3.3 version added a File\Stream class for BinaryFileResponse contents of unknown size, allowed referencing files directly from kernel.root_dir, added a new Dotenv component and deprecated case-insensitive service identifiers.
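As a rough sketch of what the new File\Stream class enables (an assumption of how it plugs into BinaryFileResponse based on the description above, not a confirmed API beyond the class name):

use Symfony\Component\HttpFoundation\BinaryFileResponse;
use Symfony\Component\HttpFoundation\File\Stream;

// A Stream represents file-like content whose size is unknown up front
// (e.g. a pipe), so the response can be sent without a Content-Length header.
$response = new BinaryFileResponse(new Stream('/path/to/generated/output'));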

Symfony development highlights

2.7 changelog:

  • e355739: [DependencyInjection] don't share service when no id provided
  • f03073c: [TwigBundle] fixed regression in TwigEngine exception handling
  • f83ad56: [Validator] respect groups when merging constraints
  • 1c6dfce: [FrameworkBundle] fixed relative paths used as cache keys
  • 17ce5f5: [TwigBundle] fixed bug where namespaced paths don't take parent bundles in account
  • 4769ca2: [Validator] fixed caching of constraints derived from non-serializable parents
  • d7bc68a: [FrameworkBundle] fixed IPv6 address handling in server commands
  • 5cf600d: [Form] DateTimeToLocalizedStringTransformer does not use timezone when using date only
  • c18c93b: [Yaml] fixed Yaml parsing for very long quoted strings

3.1 changelog:

  • 5c68c69: [Ldap] always have a valid connection when using the EntryManager
  • cbb5332: [Cache] used strpbrk() instead of strcspn() because it is faster

3.2 changelog:

  • 75de5eb: [FrameworkBundle] fixed class_exists() checks in PhpArrayAdapter-related cache warmers
  • 6ba7981: [WebProfilerBundle] fixed form profiler errors profiler_dump

Master changelog:

  • e6bd47e: [SecurityBundle] removed usage of the templating component
  • 5f4ba31: [DependencyInjection] allowed ~ instead of {} for services in Yaml
  • e66e6af: [HttpFoundation] added File\Stream for size-unknown BinaryFileResponse
  • 9e6d6ba: [DependencyInjection] fixed aliases visibility with and without defaults
  • 4d916c6: [DependencyInjection] allowed specifying the type to load explicitly in FileLoaders
  • 629de96: [FrameworkBundle] allowed to dump extension config reference sub-path
  • bdd0f9d: [FrameworkBundle] allow to reference files directly from kernel.root_dir
  • cbecfc3: [Form] removed unused ResolvedTypeFactory in FormFactory constructor
  • 294a877: [Console] moved AddConsoleCommandPass from FrameworkBundle to Console
  • df876b3: [Form] allow to configure labels for DateIntervalType and enhanced its form theme
  • 7aeb31e: [DependencyInjection] deprecated case insensitivity of service identifiers
  • a995383: [FrameworkBundle] added a --show-arguments flag to the debug:container command
  • 9089131: added a new Dotenv component
  • 42c3d4f: [Cache] relaxed binary-constraint on Memcached connections

Newest issues and pull requests

Twig development highlights

Master changelog:

  • d192cdc: delayed marking the environment as initialized until it is done
  • bc6a913: added Twig_NodeCaptureInterface for nodes that capture all output
  • feb9bf5: turned fatal error into exception when a previously generated cache prevents loading its newly compiled version
  • 0f2cbf5: do not override case-insensitive cache entries

Silex development highlights

Master changelog:

  • 02ba1df: added the FormRegistry as a service to enable the extension point

They talked about us



News stories from Friday 13 January, 2017

10:21 New in Symfony 3.3: Dotenv component » Post from Symfony Blog

Contributed by
Fabien Potencier
in #21234.

A common practice when developing applications is to store some configuration options as environment variables in a .env file (pronounced "dot-env"). You can already use this technique in Symfony applications, but in Symfony 3.3 we've decided to make it a built-in feature thanks to the new Dotenv component.

In practice, the Dotenv component parses .env files to make environment variables stored in them accessible in your application via getenv(), $_ENV or $_SERVER. If your .env file contains these variables:

DB_USER=root
DB_PASS=pass

The following code will parse them and turn them into environment variables:

use Symfony\Component\Dotenv\Dotenv;

(new Dotenv())->load(__DIR__.'/.env');

Now you can get the database password in your application as follows:

$dbPassword = getenv('DB_PASS');

In addition to loading variables, the component also lets you just parse them, because it splits its work into three stages: load, parse, and populate.
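To make those stages concrete, here is a minimal sketch, assuming parse() accepts the raw file contents and returns an array of variables, and populate() exports such an array to the environment (an illustration inferred from the stage names above, not an authoritative API reference):

use Symfony\Component\Dotenv\Dotenv;

$dotenv = new Dotenv();

// parse: turn raw .env content into an array of variables
// without touching the real environment
$values = $dotenv->parse(file_get_contents(__DIR__.'/.env'));
// e.g. ['DB_USER' => 'root', 'DB_PASS' => 'pass']

// populate: export those values as real environment variables;
// load() is essentially parse() followed by populate()
$dotenv->populate($values);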

Before creating a new component, we reviewed the existing libraries that provide similar features, but none of them matched our specific set of requirements:

  • Variables should not be validated in any way (because the real environment variables can only be strings and you can't validate them).
  • The component provides a strict implementation of what you can do in a real bash shell script and nothing more: $VAR and ${VAR} are supported, you can concatenate strings, execute commands and store the result in a variable, etc. (see the sample .env file after this list).
  • Superb error messages to easily spot any issue.
  • Clean and minimal API, without unneeded abstractions like being able to add an environment variable directly (just use putenv()).
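To illustrate that bash-like feature set, here is a hypothetical .env file exercising the features listed above (the variable names are made up for this example):

# plain values
DB_USER=root
DB_PASS=pass

# variable references and string concatenation, as in a shell script
DB_DSN=mysql://${DB_USER}:${DB_PASS}@127.0.0.1

# executing a command and storing its result in a variable
START_TIME=$(date)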


News stories from Thursday 12 January, 2017

22:53 Symfony 3.2.2 released » Post from Symfony Blog

Symfony 3.2.2 has just been released. Here is a list of the most important changes:

  • bug #21257 [Profiler][Form] Fix form profiler errors profiler_dump (ogizanagi)
  • bug #21243 [FrameworkBundle] Fix class_exists() checks in PhpArrayAdapter-related cache warmers (nicolas-grekas, mpajunen)
  • bug #21218 [Form] DateTimeToLocalizedStringTransformer does not use timezone when using date only (magnetik)
  • bug #20605 [Ldap] Always have a valid connection when using the EntryManager (bobvandevijver)
  • bug #21104 [FrameworkBundle] fix IPv6 address handling in server commands (xabbuh)
  • bug #20793 [Validator] Fix caching of constraints derived from non-serializable parents (uwej711)
  • bug #19586 [TwigBundle] Fix bug where namespaced paths don't take parent bundles in account (wesleylancel)
  • bug #21237 [FrameworkBundle] Fix relative paths used as cache keys (nicolas-grekas)
  • bug #21183 [Validator] respect groups when merging constraints (xabbuh)
  • bug #21179 [TwigBundle] Fixing regression in TwigEngine exception handling (Bertalan Attila)
  • bug #21220 [DI] Fix missing new line after private alias (ogizanagi)
  • bug #21211 Classloader tmpname (lyrixx)
  • bug #21205 [TwigBundle] fixed usage when Templating is not installed (fabpot)
  • bug #21155 [Validator] Check cascasdedGroups for being countable (scaytrase)
  • bug #21200 [Filesystem] Check that directory is writable after created it in dumpFile() (chalasr)
  • bug #21186 [Bridge/PhpUnit] Relax expectedDeprecation for forward compat (nicolas-grekas)
  • bug #21184 [FrameworkBundle] Remove Response* from classes to compile (nicolas-grekas)
  • bug #21165 [Serializer] int is valid when float is expected when deserializing JSON (dunglas)
  • bug #21167 [Cache] Remove silenced warning tiggered by PhpArrayAdapter (nicolas-grekas)
  • bug #21166 [Cache] Fix order of writes in ChainAdapter (nicolas-grekas)
  • bug #21113 [FrameworkBundle][HttpKernel] Fix resources loading for bundles with custom structure (chalasr)
  • bug #20995 [DependencyInjection] Fix the priority order of compiler pass trait (francoispluchino)
  • bug #21084 [Yaml] handle empty lines inside unindented collection (xabbuh)
  • bug #21143 [PhpUnitBridge] Set COMPOSER_ROOT_VERSION while installing (nicolas-grekas)
  • bug #20925 [HttpFoundation] Validate/cast cookie expire time (ro0NL)
  • bug #21138 [PhpUnitBridge] skip tests with failure and error states too (xabbuh)
  • bug #21135 [PhpUnitBridge] hide stack trace of expected deprecation failures (xabbuh)
  • bug #21117 [Yaml] add missing indicator character (xabbuh)
  • bug #21121 [PhpUnitBridge] respect skipped and incomplete tests (xabbuh)
  • bug #21032 [SecurityBundle] Made collection of user provider unique when injecting them to the RemberMeService (lyrixx)
  • bug #21078 [Console] Escape default value when dumping help (lyrixx)
  • bug #21076 [Console] OS X Can't call cli_set_process_title php without superuser (ogizanagi)
  • bug #20900 [Console] Descriptors should use Helper::strlen (ogizanagi)
  • bug #21025 [Cache] remove is_writable check on filesystem cache (4rthem)
  • bug #21064 [Debug] Wrap call to ->log in a try catch block (lyrixx)
  • bug #21069 [Debug] Fixed cast of stream (lyrixx)
  • bug #21010 [Debug] UndefinedMethodFatalErrorHandler - Handle anonymous classes (SpacePossum)
  • bug #20991 [cache] Bump RedisAdapter default timeout to 5s (Nicofuma)
  • bug #20959 [FrameworkBundle] Ignore AnnotationException exceptions in the AnnotationsCacheWarmer (fancyweb)
  • bug #20795 [FrameworkBundle] Allow multiple transitions with the same name (Padam87)
  • bug #20859 Avoid warning in PHP 7.2 because of non-countable data (wouterj)
  • bug #21053 [Validator] override property constraints in child class (xabbuh)
  • bug #21034 [FrameworkBundle] Make TemplateController working without the Templating component (dunglas)
  • bug #20970 [Console] Fix question formatting using SymfonyStyle::ask() (chalasr, ogizanagi)
  • bug #20999 [HttpKernel] Continuation of #20599 for 3.1 (ro0NL)
  • bug #20975 [Form] fix group sequence based validation (xabbuh)
  • bug #20599 [WebProfilerBundle] Display multiple HTTP headers in WDT (ro0NL)
  • bug #20799 [TwigBundle] do not try to register incomplete definitions (xabbuh)
  • bug #20961 [Validator] phpize default option values (xabbuh)
  • bug #20934 [FrameworkBundle] Fix PHP form templates on translatable attributes (ro0NL)
  • bug #20957 [FrameworkBundle] test for the Validator component to be present (xabbuh)
  • bug #20936 [DependencyInjection] Fix on-invalid attribute type in xsd (ogizanagi)
  • bug #20931 [VarDumper] Fix dumping by-ref variadics (nicolas-grekas)
  • bug #20749 [FrameworkBundle] Smarter default for framework.annotations (ogizanagi)
  • bug #20734 [Security] AbstractVoter->supportsAttribute gives false positive if attribute is zero (0) (martynas-foodpanda)
  • bug #14082 [config] Fix issue when key removed and left value only (zerustech)
  • bug #20910 [HttpFoundation] Fix cookie to string conversion for raw cookies (ro0NL)
  • bug #20909 Fix misresolved parameters in debug:config on 3.2 (chalasr)
  • bug #20904 [TwigBundle] Config is now a hard dependency (dunglas)
  • bug #20847 [Console] fixed BC issue with static closures (araines)

Want to upgrade to this new release? Fortunately, because Symfony protects backwards-compatibility very closely, this should be quite easy. Read our upgrade documentation to learn more.

Want to check the integrity of this new version? Read my blog post about signing releases.

Want to be notified whenever a new Symfony release is published? Or when a version is not maintained anymore? Or only when a security issue is fixed? Consider subscribing to the Symfony Roadmap Notifications.


22:35 Symfony 3.1.9 released » Post from Symfony Blog

Symfony 3.1.9 has just been released. Here is a list of the most important changes:

  • bug #21218 [Form] DateTimeToLocalizedStringTransformer does not use timezone when using date only (magnetik)
  • bug #20605 [Ldap] Always have a valid connection when using the EntryManager (bobvandevijver)
  • bug #21104 [FrameworkBundle] fix IPv6 address handling in server commands (xabbuh)
  • bug #20793 [Validator] Fix caching of constraints derived from non-serializable parents (uwej711)
  • bug #19586 [TwigBundle] Fix bug where namespaced paths don't take parent bundles in account (wesleylancel)
  • bug #21237 [FrameworkBundle] Fix relative paths used as cache keys (nicolas-grekas)
  • bug #21183 [Validator] respect groups when merging constraints (xabbuh)
  • bug #21179 [TwigBundle] Fixing regression in TwigEngine exception handling (Bertalan Attila)
  • bug #21220 [DI] Fix missing new line after private alias (ogizanagi)
  • bug #21211 Classloader tmpname (lyrixx)
  • bug #21205 [TwigBundle] fixed usage when Templating is not installed (fabpot)
  • bug #21155 [Validator] Check cascasdedGroups for being countable (scaytrase)
  • bug #21200 [Filesystem] Check that directory is writable after created it in dumpFile() (chalasr)
  • bug #21165 [Serializer] int is valid when float is expected when deserializing JSON (dunglas)
  • bug #21166 [Cache] Fix order of writes in ChainAdapter (nicolas-grekas)
  • bug #21113 [FrameworkBundle][HttpKernel] Fix resources loading for bundles with custom structure (chalasr)
  • bug #21084 [Yaml] handle empty lines inside unindented collection (xabbuh)
  • bug #20925 [HttpFoundation] Validate/cast cookie expire time (ro0NL)
  • bug #21032 [SecurityBundle] Made collection of user provider unique when injecting them to the RemberMeService (lyrixx)
  • bug #21078 [Console] Escape default value when dumping help (lyrixx)
  • bug #21076 [Console] OS X Can't call cli_set_process_title php without superuser (ogizanagi)
  • bug #20900 [Console] Descriptors should use Helper::strlen (ogizanagi)
  • bug #21025 [Cache] remove is_writable check on filesystem cache (4rthem)
  • bug #21064 [Debug] Wrap call to ->log in a try catch block (lyrixx)
  • bug #21010 [Debug] UndefinedMethodFatalErrorHandler - Handle anonymous classes (SpacePossum)
  • bug #20991 [cache] Bump RedisAdapter default timeout to 5s (Nicofuma)
  • bug #20859 Avoid warning in PHP 7.2 because of non-countable data (wouterj)
  • bug #21053 [Validator] override property constraints in child class (xabbuh)
  • bug #21034 [FrameworkBundle] Make TemplateController working without the Templating component (dunglas)
  • bug #20970 [Console] Fix question formatting using SymfonyStyle::ask() (chalasr, ogizanagi)
  • bug #20999 [HttpKernel] Continuation of #20599 for 3.1 (ro0NL)
  • bug #20975 [Form] fix group sequence based validation (xabbuh)
  • bug #20599 [WebProfilerBundle] Display multiple HTTP headers in WDT (ro0NL)
  • bug #20799 [TwigBundle] do not try to register incomplete definitions (xabbuh)
  • bug #20961 [Validator] phpize default option values (xabbuh)
  • bug #20934 [FrameworkBundle] Fix PHP form templates on translatable attributes (ro0NL)
  • bug #20957 [FrameworkBundle] test for the Validator component to be present (xabbuh)
  • bug #20936 [DependencyInjection] Fix on-invalid attribute type in xsd (ogizanagi)
  • bug #20931 [VarDumper] Fix dumping by-ref variadics (nicolas-grekas)
  • bug #20734 [Security] AbstractVoter->supportsAttribute gives false positive if attribute is zero (0) (martynas-foodpanda)
  • bug #14082 [config] Fix issue when key removed and left value only (zerustech)
  • bug #20910 [HttpFoundation] Fix cookie to string conversion for raw cookies (ro0NL)
  • bug #20847 [Console] fixed BC issue with static closures (araines)

Want to upgrade to this new release? Fortunately, because Symfony protects backwards-compatibility very closely, this should be quite easy. Read our upgrade documentation to learn more.

Want to check the integrity of this new version? Read my blog post about signing releases.

Want to be notified whenever a new Symfony release is published? Or when a version is not maintained anymore? Or only when a security issue is fixed? Consider subscribing to the Symfony Roadmap Notifications.


21:42 Symfony 2.8.16 released » Post from Symfony Blog

Symfony 2.8.16 has just been released. Here is a list of the most important changes:

  • bug #21218 [Form] DateTimeToLocalizedStringTransformer does not use timezone when using date only (magnetik)
  • bug #21104 [FrameworkBundle] fix IPv6 address handling in server commands (xabbuh)
  • bug #20793 [Validator] Fix caching of constraints derived from non-serializable parents (uwej711)
  • bug #19586 [TwigBundle] Fix bug where namespaced paths don't take parent bundles in account (wesleylancel)
  • bug #21237 [FrameworkBundle] Fix relative paths used as cache keys (nicolas-grekas)
  • bug #21183 [Validator] respect groups when merging constraints (xabbuh)
  • bug #21179 [TwigBundle] Fixing regression in TwigEngine exception handling (Bertalan Attila)
  • bug #21220 [DI] Fix missing new line after private alias (ogizanagi)
  • bug #21211 Classloader tmpname (lyrixx)
  • bug #21205 [TwigBundle] fixed usage when Templating is not installed (fabpot)
  • bug #21155 [Validator] Check cascasdedGroups for being countable (scaytrase)
  • bug #21200 [Filesystem] Check that directory is writable after created it in dumpFile() (chalasr)
  • bug #21113 [FrameworkBundle][HttpKernel] Fix resources loading for bundles with custom structure (chalasr)
  • bug #21084 [Yaml] handle empty lines inside unindented collection (xabbuh)
  • bug #20925 [HttpFoundation] Validate/cast cookie expire time (ro0NL)
  • bug #21032 [SecurityBundle] Made collection of user provider unique when injecting them to the RemberMeService (lyrixx)
  • bug #21078 [Console] Escape default value when dumping help (lyrixx)
  • bug #21076 [Console] OS X Can't call cli_set_process_title php without superuser (ogizanagi)
  • bug #20900 [Console] Descriptors should use Helper::strlen (ogizanagi)
  • bug #21064 [Debug] Wrap call to ->log in a try catch block (lyrixx)
  • bug #21010 [Debug] UndefinedMethodFatalErrorHandler - Handle anonymous classes (SpacePossum)
  • bug #20859 Avoid warning in PHP 7.2 because of non-countable data (wouterj)
  • bug #21053 [Validator] override property constraints in child class (xabbuh)
  • bug #21034 [FrameworkBundle] Make TemplateController working without the Templating component (dunglas)
  • bug #20970 [Console] Fix question formatting using SymfonyStyle::ask() (chalasr, ogizanagi)
  • bug #20975 [Form] fix group sequence based validation (xabbuh)
  • bug #20599 [WebProfilerBundle] Display multiple HTTP headers in WDT (ro0NL)
  • bug #20799 [TwigBundle] do not try to register incomplete definitions (xabbuh)
  • bug #20961 [Validator] phpize default option values (xabbuh)
  • bug #20934 [FrameworkBundle] Fix PHP form templates on translatable attributes (ro0NL)
  • bug #20957 [FrameworkBundle] test for the Validator component to be present (xabbuh)
  • bug #20936 [DependencyInjection] Fix on-invalid attribute type in xsd (ogizanagi)
  • bug #20931 [VarDumper] Fix dumping by-ref variadics (nicolas-grekas)
  • bug #20734 [Security] AbstractVoter->supportsAttribute gives false positive if attribute is zero (0) (martynas-foodpanda)
  • bug #14082 [config] Fix issue when key removed and left value only (zerustech)
  • bug #20847 [Console] fixed BC issue with static closures (araines)

Want to upgrade to this new release? Fortunately, because Symfony protects backwards-compatibility very closely, this should be quite easy. Read our upgrade documentation to learn more.

Want to check the integrity of this new version? Read my blog post about signing releases.

Want to be notified whenever a new Symfony release is published? Or when a version is not maintained anymore? Or only when a security issue is fixed? Consider subscribing to the Symfony Roadmap Notifications.


21:25 Symfony 2.7.23 released » Post from Symfony Blog

Symfony 2.7.23 has just been released. Here is a list of the most important changes:

  • bug #21218 [Form] DateTimeToLocalizedStringTransformer does not use timezone when using date only (magnetik)
  • bug #21104 [FrameworkBundle] fix IPv6 address handling in server commands (xabbuh)
  • bug #20793 [Validator] Fix caching of constraints derived from non-serializable parents (uwej711)
  • bug #19586 [TwigBundle] Fix bug where namespaced paths don't take parent bundles in account (wesleylancel)
  • bug #21237 [FrameworkBundle] Fix relative paths used as cache keys (nicolas-grekas)
  • bug #21183 [Validator] respect groups when merging constraints (xabbuh)
  • bug #21179 [TwigBundle] Fixing regression in TwigEngine exception handling (Bertalan Attila)
  • bug #21220 [DI] Fix missing new line after private alias (ogizanagi)
  • bug #21211 Classloader tmpname (lyrixx)
  • bug #21205 [TwigBundle] fixed usage when Templating is not installed (fabpot)
  • bug #21155 [Validator] Check cascasdedGroups for being countable (scaytrase)
  • bug #21200 [Filesystem] Check that directory is writable after created it in dumpFile() (chalasr)
  • bug #21113 [FrameworkBundle][HttpKernel] Fix resources loading for bundles with custom structure (chalasr)
  • bug #21084 [Yaml] handle empty lines inside unindented collection (xabbuh)
  • bug #20925 [HttpFoundation] Validate/cast cookie expire time (ro0NL)
  • bug #21032 [SecurityBundle] Made collection of user provider unique when injecting them to the RemberMeService (lyrixx)
  • bug #21078 [Console] Escape default value when dumping help (lyrixx)
  • bug #21076 [Console] OS X Can't call cli_set_process_title php without superuser (ogizanagi)
  • bug #20900 [Console] Descriptors should use Helper::strlen (ogizanagi)
  • bug #21064 [Debug] Wrap call to ->log in a try catch block (lyrixx)
  • bug #21010 [Debug] UndefinedMethodFatalErrorHandler - Handle anonymous classes (SpacePossum)
  • bug #20859 Avoid warning in PHP 7.2 because of non-countable data (wouterj)
  • bug #21053 [Validator] override property constraints in child class (xabbuh)
  • bug #20970 [Console] Fix question formatting using SymfonyStyle::ask() (chalasr, ogizanagi)
  • bug #20975 [Form] fix group sequence based validation (xabbuh)
  • bug #20599 [WebProfilerBundle] Display multiple HTTP headers in WDT (ro0NL)
  • bug #20799 [TwigBundle] do not try to register incomplete definitions (xabbuh)
  • bug #20961 [Validator] phpize default option values (xabbuh)
  • bug #20934 [FrameworkBundle] Fix PHP form templates on translatable attributes (ro0NL)
  • bug #20957 [FrameworkBundle] test for the Validator component to be present (xabbuh)
  • bug #20936 [DependencyInjection] Fix on-invalid attribute type in xsd (ogizanagi)
  • bug #20931 [VarDumper] Fix dumping by-ref variadics (nicolas-grekas)
  • bug #20734 [Security] AbstractVoter->supportsAttribute gives false positive if attribute is zero (0) (martynas-foodpanda)
  • bug #14082 [config] Fix issue when key removed and left value only (zerustech)

Want to upgrade to this new release? Fortunately, because Symfony protects backwards-compatibility very closely, this should be quite easy. Read our upgrade documentation to learn more.

Want to check the integrity of this new version? Read my blog post about signing releases.

Want to be notified whenever a new Symfony release is published? Or when a version is not maintained anymore? Or only when a security issue is fixed? Consider subscribing to the Symfony Roadmap Notifications.


16:00 A Dao of Product Design » Post from A List Apart: The Full Feed

When a designer or developer sets out to create a new product, the audience is thought of as “the user”: we consider how she might use it, what aspects make it accessible and usable, what emotional interactions make it delightful, and how we can optimize the workflow for her and our benefit. What is rarely considered in the process is the social and societal impact of our product being used by hundreds of thousands—even millions—of people every day.

What a product does to people psychologically, or how it has the power to transform our society, is hard to measure but increasingly important. Good products improve how people accomplish tasks; great products improve how society operates. If we don’t practice a more sustainable form of product design, we risk harmful side effects to people and society that could have been avoided.

The impact of product design decisions

In 1956, President Eisenhower signed the U.S. Interstate Highway Act into law. Inspired by Germany’s Reichsautobahnen, Eisenhower was determined to develop the cross-country highways that lawmakers had been discussing for years.

During the design of this interstate network, these “open roads of freedom” were often routed directly through cities, intentionally creating an infrastructural segregation that favored affluent neighborhoods at the expense of poor or minority neighborhoods. Roads became boundaries, subtly isolating residents by socioeconomic status; such increasingly visible distinctions encouraged racist views and ultimately devastated neighborhoods. The segmentation systematically diminished opportunities for those residents, heavily impacting people of color and adversely shaping the racial dynamics of American society.

Such widespread negative consequences are not limited to past efforts or malicious intentions. For example, the laudable environmental effort to replace tungsten street lamps with sustainable LEDs is creating a number of significant health and safety problems, because the human impact of applying the change at scale was not thought through sufficiently.

In each example, we see evidence of designers who didn’t seriously consider the long-term social and moral impacts their work might have on the very people they were designing for. As a result, people all around suffered significant negative side effects.

The ur-discipline

Although the process is rarely identified as such, product design is the oldest practiced discipline in human history. It is also one of the most under-examined; only in relatively recent times have we come to explore the ways products shape the contexts in which they exist.

Designers often seek to control the experience users have with their product, aiming to polish each interaction and every detail, crafting it to give a positive—even emotional—experience to the individual. But we must be cautious of imbalance; a laser focus on the micro can draw attention and care away from the macro. Retaining a big-picture view of the product can provide meaning, not only for the user’s tasks, but for her as a person, and for her environment.

Dieter Rams’s ninth principle says that good design is environmentally friendly; it is sustainable. This is generally interpreted to mean the material resources and costs involved in production, but products also affect the immaterial: the social, economic, and cognitive world the user inhabits while considering and using the product.

At a high level, there is an easy way to think about this: your product and your users do not exist in a vacuum. Your algorithms are not fair or neutral. Your careful touch is not pristine.

Your life experiences instill certain values and biases into your way of thinking. These, in turn, color your design process and leave an imprint behind in the product. It’s essentially the DNA of your decisions, something embedded deeply in the fabric of your work, and visible only under extremely close inspection.

Unlike our DNA, we can consciously control the decisions that shape our products and strive to ensure they have a positive impact, even in the myriad subtle and non-obvious ways we might not anticipate. Let’s learn to solve the problems we can’t yet see when designing our products.

Design for inclusion

When we set out to design a product, we generally have a target audience in mind. But there are distinctions between functional target audiences and holistic ones. To create products that embrace long-term positive impacts, we must embrace inclusive thinking as comprehensively as we can.

Conduct research into racial and gender politics to broaden your awareness of the social structures that impact your customers’ lives. These structures alter people’s priorities and affect their decision-making process, so design for as many social and societal considerations as possible. Sometimes people who fall outside the “target audience” are overlooked simply because their priorities for your product come in second place in their lives. Design your product to bridge such gaps, rather than ignoring them.

Listen to the voices of people expressing concern and learn to see the pain points they experience, even if they don’t articulate them as such. Step up to your responsibilities as a designer, curator, entrepreneur, or platform owner. You may not be an elected official, but when you offer products you still have responsibility over the roles they play in people’s lives and experiences—so govern accordingly.

Read studies that examine human psychology to understand how people’s biases may be exacerbated by your product. Learn about microaggressions so you can consciously design around them. Extrapolate how people with nefarious goals—from hackers to authoritarian governments—could exploit or abuse your features or the data you collect.

Work with data and let it inform you, but remember that data is suggestive, not authoritative; the data we gather is always a myopic subset of the entirety that exists but cannot possibly be measured. Enrich your process and viewpoint with information, but let your heart drive your design process.

These principles are more than “nice-to-haves”—they help you design with an ethical and moral code as inherent throughout the product as the design system used to build it.

Foster positivity and civility

When we use a product frequently, the DNA of its design process can leave a psychological imprint on us. Facebook knows it can affect people’s moods by putting more positive items in their feeds. When news broke that it did so, people were upset about this manipulation. In actuality, our lives are constantly being manipulated by algorithms anyway; we’re just not very conscious of it. Often, even the people who designed the algorithms aren’t conscious of the deeper manipulative impacts.

Features like upvotes and downvotes may seem like a balanced solution for people to express opinions, but the downvote’s only purpose is to feed and perpetuate negativity; it can be avoided or removed entirely without harmful consequences.

Don’t give angry people shortcuts to wield negative power; make them either articulate their anger or deal with it in more constructive ways. Social media platforms never benefit from angry, biased groups suppressing messages (often positive and constructive) from people they despise. In those scenarios, everyone loses—so why design the option into your product?

Any feature that petty, time-rich people can abuse to game your product’s ranking or discovery algorithms is a feature that eventually serves up toxic behaviors (regardless of the person’s politics) and is best left out.

Also avoid features that simply waste time, because when people waste time they feel less happy than when they do something productive or constructive. And of course, don’t deliberately design time-wasters into your product and offer users a premium fee to avoid them; that’s just not civil.

To foster positive behavior and encourage civility, you can reward good behavior and hold bad behavior accountable. Holding bad behavior accountable is crucial to establishing a credible community or platform—but accountability without rewards for good behavior risks creating a fear-driven atmosphere.

A great example of designing consciously like this is Nextdoor, a platform for local communities. Nextdoor made a purposeful effort to reduce racial profiling by users by redesigning a small part of its product. For example, when reporting “suspicious activity,” new follow-up questions like “What are they doing that’s suspicious?” are required fields, so that users can no longer simply accuse people of color of “being suspicious.” The resulting 75 percent reduction in racial profiling is great for obvious reasons, but it also means users are actively being trained to no longer treat “person of color” and “suspicious” as interchangeable.

Design to avoid vectors of abuse; strive to encourage positive interactions and, wherever possible, challenge and transform existing biases.

Boost confidence and courage

People likely use your product to accomplish something, whether it’s a leisure task or a professional one. A user who repeats certain tasks with your product is effectively practicing her interactions; find the opportunities therein to help her grow as a person, not just succeed as a worker.

For example, when my cofounder and I set out to create Presentate, our goal wasn’t merely to create a web-based version of Keynote or PowerPoint—we set out to help people lose their fear of public speaking, to prevent audiences from experiencing “Death by PowerPoint,” and to create the fastest, most effective presentation software and sharing platform available on any device.

Our business effort was cut short, but our product design goals were achieved even with our alpha software: our users—the presenters—felt more confident and relaxed, found it easier to focus their energies on their talks, and spent far less time creating the presentations (leaving more time to rehearse). Plus, their audiences didn’t suffer through the dreaded stack of bullet points and a monotonous presentation.

Instead of seeing our product as a combination of features and UI, we considered it a tool that could empower people far beyond the scope of their tasks. Your product can do the same if you think about how it could strengthen related skills (in our case, public speaking) the more someone “practices” by using it.

Think about features and insights that encourage people in positive ways; teach them knowledge you have that they might not, perhaps by embedding those principles directly into the product as features.

Your user is likely a busy person with a million things on her plate—and on her mind. She won’t sit down and think introspectively about how your product affects her life, but you as the designer or developer can and should do precisely that.

You can spend the extra time upfront thinking about how to inform or teach your users new insights or techniques that help build the confidence they are looking for. Empowerment isn’t just the facilitation of a new ability—it’s the emotional and mental strengthening of confidence in your customer when she meets a challenge and accomplishes something impressive.

Strengthen emotional fortitude

Emotional fortitude is the foundation that helps you to be courageous and honest, and to better withstand setbacks. A person who feels emotionally secure has an easier time finding the courage to admit failure or mistakes, which creates opportunities for them to learn and grow. Conversely, emotional fragility erodes a person’s confidence and obstructs personal growth.

People’s emotional states are influenced heavily by external factors. Our environment plays a role in shaping how we see the world, its opportunities, and its problems. But while there’s been extensive research into the role of legislation on our lives, there’s comparatively little research examining the role that products play in our environment. This is becoming pressing as software and technology communicate with us, to us, and about us as frequently as other people do; they now have as much of an effect on our lives as laws and regulations.

Behavioral science and nudge theory strongly suggest that behaviors can be positively influenced by conscious efforts. For instance, rather than mandating certain actions, you could encourage better decisions or actions by making them more prominent or appealing. This kind of influence can and often does extend beyond behaviors and into our states of mind.

To be clear, this is not a deterministic argument—technology and products don’t inherently make us sad or happy, confident or anxious. Rather, this is an argument that products have the potential to influence us in emotional ways, and that the greater a product’s user base and its daily use of the product, the more impactful its effects can be on how they see and experience the world.

The strongest case for this is made by a variety of studies that show that our current social media platforms make people less happy. But what if those platforms had the opposite effect, instead making people happier and more confident about their lives?

One way is to take a teaching approach with your users. When enforcing Terms of Service, for instance, just saying "your actions are unacceptable and violate our ToS" doesn't explain what was not okay or why you don't want that kind of behavior, and it doesn't suggest which behaviors you would like to see instead. Blunt enforcement makes people feel emotionally insecure, so focus on the positive kinds of interactions you wish to foster on your platform. They can be actual conversations, or simply part of your marketing and messaging.

Products can also affect our psychological and emotional well-being through the types of behaviors they facilitate and foster. For example, features that can be exploited by petty individuals may result in a great amount of petty behavior on your platform or within your community; we know this behavior creates emotional fragility, not fortitude. On the other hand, features that surprise and delight users (a tenet of great emotional design) can have a fortifying effect on a person’s emotional state.

When designing Presentate, our goal wasn't "to make slideware"; our goal was to make presenters more confident in their material and in themselves as speakers. Our means of achieving that goal was to design a slideware product that would accomplish both.

Another fine example is Tesla, a company that makes electric vehicles and associated technology. As its CEO and founder Elon Musk repeats at many of their product announcements, Tesla’s goal—its mission—is to transform us into a renewable-energy human society. In setting its goal accordingly (and explicitly!), Tesla operates on the premise that it needs to do more than simply make a product; it needs to change people’s views and how they feel about their existing products. At the Solar Roof announcement, Musk reiterated that “the key is to make it desirable,” to make something people want regardless of its role in the energy revolution. Similarly, Tesla’s Model S car outperforms many a muscle car in drag races, legitimizing the electric vehicle as a high-performance option for speed enthusiasts. This approach helps to change people’s wider perceptions, extending beyond the products themselves.

When we set our goals not just to create great products, but products that help transform how we think, we can tackle underlying biases and prejudices that people may have but would be happy to be eased out of. We strengthen their confidence and character, and address problems that go well beyond the scope of any one product. And while none of us are solely responsible for fixing major problems in society, each of us, when designing a product, has an opportunity to make it part of the solution.

Or as Nextdoor CEO Nirav Tolia said, when asked about why they changed their design:

We don’t think Nextdoor can stamp out racism, but we feel a moral and business obligation to be part of the solution.

Recreate social mores

There is no digital duality, no "real world" separated from our environment online. Generally, every avatar you talk with on a screen has one or more real people behind it—people with real feelings you can hurt as easily online as you could to their face. You just don't see it, because on screen we miss out on a number of social cues: tone, sarcasm, playfulness, hurt feelings—or disapproving frowns from our peers.

A street harasser exploits the lack of a social circle that pressures them to behave decently. Oftentimes this is out of ignorance, not malice, including when the harasser is in the company of others who are equally unaware that such behavior is unwelcome and uncivil. Many, of course, are in denial and shout catcalls at women despite knowing better—and wouldn't dare catcall a woman in front of their mothers, for example.

In the digital environment, those external social pressures to behave are often lost, so unless that restraint comes from the strength you have within, it's all too easy to slip into behavior you wouldn't engage in while speaking with someone face to face. Let's be honest: we've all said things to people online at some point or another that we would be ashamed to repeat in person.

From a product perspective, that means we have to rely on mechanisms that either invoke those social mores to encourage civil and fruitful interactions, or outright enforce them. We have to design a simulated social circle of peer-pressuring friends into the products we make. Nextdoor did it with form fields that asked follow-up questions. What can your product do?

See the best in people (but be realistic)

People prefer being good and happy over being mean-spirited or awful. You can design your products to encourage the best sides of people, to let them shine in their brilliance, to help them learn and grow while doing their work. But don’t mistake seeing the best in people as a reason not to anticipate harmful behaviors or exploitation of your features.

As product designers we deliberately craft solutions to envisioned problems. We should practice expanding our view to encompass and understand more people and the problems they are experiencing. We should strive to make our work a part of the solution, in ways that scale up to millions of users without harmful side effects.

You’ve read this far. That means you’re eager and ready to think bigger, more holistically, and more empathetically about the work that you do. Armed with these principles, you’re ready to take your product design to the next level.

We can’t wait to see what you’ll create!

Favicon for Symfony Blog 15:49 New in Symfony 3.3: Dependency Injection deprecations » Post from Symfony Blog Visit off-site link

Deprecated dumping an uncompiled container

Contributed by Roland Franssen in #20634.

The service container in Symfony applications is usually configured with YAML and XML files, but it's dumped into PHP before the application executes to improve performance.

Compiling and dumping the container is a rather complex process and in Symfony 3.3 we've simplified it a bit by deprecating the dumping of uncompiled containers. This change won't impact you unless you use the stand-alone DependencyInjection component.
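If you use the component stand-alone, the fix is simply to compile the container before dumping it. Here is a minimal sketch of that workflow (the app.mailer service and App\Mailer class are hypothetical placeholders):

use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Dumper\PhpDumper;

$container = new ContainerBuilder();
// register one of your own services ('app.mailer' is just an example)
$container->register('app.mailer', 'App\Mailer');

// compile first: dumping an uncompiled container is deprecated in 3.3
$container->compile();

// dump the compiled container to plain PHP for production use
$dumper = new PhpDumper($container);
file_put_contents(__DIR__.'/cached_container.php', $dumper->dump());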

Deprecated the DefinitionDecorator class

Contributed by Christian Flothmann in #20663.

The Symfony\Component\DependencyInjection\DefinitionDecorator class is confusing because it has nothing to do with service decoration. Instead, it is used to express a parent-child relationship between service definitions.

In Symfony 3.3, to avoid any confusion, this class has been deprecated and renamed to Symfony\Component\DependencyInjection\ChildDefinition.
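Upgrading should mostly be a matter of renaming the class. Here is a small sketch, assuming $container is a ContainerBuilder and using a hypothetical pair of mailer services:

use Symfony\Component\DependencyInjection\ChildDefinition;

// before (deprecated in 3.3):
// $definition = new DefinitionDecorator('app.base_mailer');

// after:
$definition = new ChildDefinition('app.base_mailer');
$definition->replaceArgument(0, 'fast.example.com'); // override the parent's first argument
$container->setDefinition('app.fast_mailer', $definition);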

Deprecated the case-insensitivity of service identifiers

Contributed by Nicolas Grekas in #21223.

Service identifiers in Symfony applications are case insensitive. This means that if your service id is app.UserManager, you can inject or get that service as app.usermanager, APP.userMANAGER, aPp.UsErMaNaGeR, etc.

In Symfony 3.3 we've deprecated this behavior: using a different case still works, but triggers a deprecation notice. In Symfony 4.0, service identifiers will become fully case-sensitive, so you must inject or get services using the exact same identifier used in the config files.
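In practice, the change looks like this (a sketch reusing the app.UserManager identifier from above; the App\UserManager class is a placeholder):

// assuming $container is a ContainerBuilder with this definition:
$container->register('app.UserManager', 'App\UserManager');

$userManager = $container->get('app.UserManager'); // always works
$userManager = $container->get('app.usermanager'); // triggers a deprecation in 3.3
                                                   // and won't work in Symfony 4.0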

In addition to being more correct, removing this feature in Symfony 4.0 will unlock other potential optimizations in the DependencyInjection component code.



News stories from Wednesday 11 January, 2017

Favicon for Symfony Blog 15:49 New in Symfony 3.3: Memcached Cache Adapter » Post from Symfony Blog Visit off-site link

Contributed by Rob Frawley and Nicolas Grekas in #20858 and #21108.

The Symfony Cache component includes several adapters to support different caching mechanisms such as Redis, APCu, the filesystem, etc. In Symfony 3.3, we added a new adapter for Memcached.

When using it as a component, first create the connection to the Memcached server and then instantiate the new adapter:

use Symfony\Component\Cache\Adapter\MemcachedAdapter;

$client = MemcachedAdapter::createConnection('memcached://localhost');
$cache = new MemcachedAdapter($client, $namespace = '', $defaultLifetime = 0);

In addition to simple servers, the connection can also be a cluster of Memcached instances with all kinds of custom configuration:

$client = MemcachedAdapter::createConnection(array(
    // format => memcached://[user:pass@][ip|host|socket[:port]][?weight=int]
    // 'weight' ranges from 0 to 100 and it's used to prioritize servers
    'memcached://my.server.com:11211',
    'memcached://rmf:abcdef@localhost',
    'memcached://127.0.0.1?weight=50',
    'memcached://username:the-password@/var/run/memcached.sock',
    'memcached:///var/run/memcached.sock?weight=20'
));

When used in a Symfony application, it's even simpler to configure and use Memcached:

# app/config/config_prod.yml
framework:
    cache:
        # defaults to memcached://localhost
        default_memcached_provider: "memcached://my.server.com:11211"
        # ...
        pools:
            app.cache.products:
                adapter: cache.adapter.memcached
                public: true
                # ...

Now you can start storing and fetching items in your Memcached-based cache:

$cacheProduct = $this->get('app.cache.products')->getItem($productId);
if (!$cacheProduct->isHit()) {
    $product = $this->loadProduct($productId); // hypothetical: fetch the product from its original source
    $cacheProduct->set($product);
    $this->get('app.cache.products')->save($cacheProduct);
} else {
    $product = $cacheProduct->get();
}


News stories from Tuesday 10 January, 2017

Favicon for Symfony Blog 12:04 New in Symfony 3.3: Search in dumped contents » Post from Symfony Blog Visit off-site link

Contributed by Maxime Steinhausser in #21109.

In Symfony applications, you can use the dump() function as a better replacement for PHP's var_dump(), thanks to the VarDumper component. The dumped contents can be easily navigated with the collapsible toggles, but sometimes it's hard to find values hidden deep inside complex dumps.
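As a reminder, dump() is not limited to full-stack Symfony applications; it also works stand-alone once the VarDumper component is installed. A minimal sketch:

require __DIR__.'/vendor/autoload.php'; // symfony/var-dumper provides the dump() helper

$order = array(
    'id' => 1234,
    'customer' => array('name' => 'Jane', 'vip' => true),
    'items' => array('sku-1', 'sku-2'),
);

dump($order); // renders a collapsible (and, in 3.3, searchable) tree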

In Symfony 3.3, the dumped contents include a local search box to help you find those values more easily:


To make the search box appear:

  1. Click anywhere on the dumped contents.
  2. Press Ctrl+F or Cmd+F.

Press Esc to hide the search box again.

The search experience has been exquisitely polished and it works everywhere:

The web debug toolbar:


The Symfony profiler:


Raw dumps in any PHP application:

Favicon for Zach Holman 01:00 Publicly Dogfooding Your Culture » Post from Zach Holman Visit off-site link

One of the easiest ways to get a job in the early GitHub days was to work on one of our open source projects. We’d start to recognize your username and ability, and you’d be able to get a better idea of how we operated internally. For a small, scrappy, bootstrapped company, this was a huge time saver for us early on.

Something that’s been fascinating to watch over the past few years is the aggressive march towards openness in technology companies. Open source software is a big deal as always, sure, but now startups are opening up their HR policies. They’re doing open product development planning. They’re even doing open salaries.

Companies that espouse this openness benefit from a variety of advantages, ranging from faster product development and quicker support cycles to a decent marketing bump. When you're talking about the organizational side of the company, though, I think there are three very important advantages: recruitment, ramp-up time, and guidance.

Recruitment: attract the right kind of employee

Like attracts like.

If you write about how your culture values a responsible work/life balance, then you’re likely going to attract potential employees who seek that type of balance.

If you write about how your company has real dope lunches, then you’re likely going to attract potential employees who seek real dope lunches.

The more a potential employee knows about you, the better prepared they will be when they finally talk to you. The company won't often have to get three interviews deep before one of the sides figures out it's not a good fit.

Ramp-up time: reduce the time required to onboard new employees

Along those lines, the more data that's public about your company, the less ramp-up time is required to onboard a new hire.

The best example of an open startup right now is GitLab. Like many, their product is almost entirely open source, so yeah, that’s cool, but as an organizational geek I really love what they’re doing in terms of openness within the company.

GitLab’s handbook is open source and accessible to anyone. Not only is that really helpful for similar companies to use as a benchmark and to collaborate on cross-company ideas, but it’s particularly great for new hires. Instead of spending a week sitting in a meeting room trying to indoctrinate your new hire, they can just read the handbook. The amount of context that makes available for a potential hire is huge.

GitLab’s been recently playing with being open about their compensation packages, both their option grants and salaries. There’s a big discussion happening around open salaries (see Buffer’s initial post which, in many ways, kicked it off), but taken organizationally, it’s really helpful in terms of making employee onboarding quicker and less ambiguous.

Guidance: think about — and then hold yourself to — a standard

Once you’ve written something down, you’re forced to defend it.

If your company has a value that you’ve publicly written down as “don’t steal shit from the customer” — to take an extreme example — then when someone in the company suggests doing something Really Wrong you can point back to your written value and say hey, maybe we shouldn’t do this.

The same goes for the reverse, too. When a company grows over time, policies and values will sometimes shift or change completely. That's the nature of growth. Once you've drawn a line in the sand, you can later change that line. (It is written in sand, after all.) To realize whether you've outgrown your old values, you're forced to spend time thinking about those values.

I’ve known companies who would say cultural things internally, kind of as a catch phrase. But since they weren’t written down and defended against, either publicly or even internally, no one sat down to actually think about whether the company still held those values. Then everyone looked like a dork, continually puppeting out these things that weren’t relevant anymore. Shit’s just weird at that point, and further decreases the effectiveness.


I’m just a big fan of being open and transparent about a company. Everyone learns more. Your company is built on a stronger foundation. You can more easily adapt when things change by reexamining your positions over time.

Write more, talk more, be more honest.

Favicon for A List Apart: The Full Feed 00:19 This week's sponsor: WEBEDITION » Post from A List Apart: The Full Feed Visit off-site link

WEBEDITION, the best way to craft and deliver issues of your own online magazine. Start now and build an issue for free.

News stories from Monday 09 January, 2017

Favicon for the web hates me 09:00 Webmontag – The lean tester » Post from the web hates me Visit off-site link

This year I had the chance to give a talk on the topic of "the lean tester" at the Hamburg Webmontag, which of course I did. The idea behind the lean tester came from the observation that we have forgotten how to think simply. That starts with software development, but it doesn't end with quality assurance. So if we […]

The post Webmontag – The lean tester appeared first on the web hates me.

News stories from Tuesday 03 January, 2017

Favicon for A List Apart: The Full Feed 16:00 The Imbalance of Culture Fit » Post from A List Apart: The Full Feed Visit off-site link

When I started Bearded back in 2008, I’d never run a business before. This lack of experience meant I didn’t know how to do many of the things I’d ultimately have to do as a business owner. One of the things I didn’t know how to do yet? Hiring.

When it came time to start hiring employees, I thought a lot about what the company needed to advance, what skills it was lacking. I asked friends for advice, and introductions to people they knew and trusted who fit the bill. And I asked myself what felt like a natural question: would I want to hang out with this person all day? Because clearly, I would have to.

The trouble with this question is that I like hanging out with people I can talk to easily. One way to make that happen is to hang out with people who know and like the same books, music, movies, and things that I do; people with similar life experiences. It may not surprise you to learn that people who have experienced and enjoy all the same things I do tend to look a whole lot like me.

The dreaded culture fit

This, my friends, is the sneaky, unintentional danger of “culture fit.” And the only way out I’ve found is to recognize it for what it is–an unhelpful bias–and to consciously correct for it.

Besides being discriminatory by unfairly overvaluing people like yourself, hiring for culture fit has at least one other major detriment: it limits perspective.

At Bearded, our main focus is problem solving. Whether those are user experience problems, project management problems, user interface problems, or development problems–that’s what we do every day.

I’ve found that we arrive at better solutions faster when we collaborate during problem solving. Having two or more people hashing out an issue, suggesting new approaches, spotting flaws in each other’s ideas, or catching things another person missed–this is the heart of good collaboration. And it’s not just about having more than one person, it’s about having different perspectives.

Perspective as a skill

A simple shortcut to finding two people who look at the world differently is to find two people with varied life experience–different genders, races, religions, sexual orientations, economic backgrounds, or abilities… these factors all affect how we see and experience the world. This means that different perspectives–different cultures–are an asset. Varied perspective can be viewed, then, as a skill. It’s something you can consciously hire for, in addition to more traditional skills and experience. Having diverse teams better reflects our humanity, and it helps us do better work.

This isn’t just my experience, either.

According to research conducted by Sheen S. Levine and David Stark, groups that included diverse company produced answers to analytical questions that were 58 percent more accurate.

When surrounded by people "like ourselves," we are easily influenced, more likely to fall for wrong ideas. Diversity prompts better, critical thinking. It contributes to error detection. It keeps us from drifting toward miscalculation.

Smarter groups and better problem-solving sound good to me. And so does increased innovation. In her article for Scientific American, Katherine W. Phillips draws on decades of research to arrive at some exciting conclusions.

Diversity enhances creativity. It encourages the search for novel information and perspectives, leading to better decision making and problem solving. Diversity can improve the bottom line of companies and lead to unfettered discoveries and breakthrough innovations. Even simply being exposed to diversity can change the way you think.

Phillips isn’t alone in linking diversity to profit. A Morgan Stanley analysis released in May 2016 showed that gender-diverse companies delivered slightly better returns with lower volatility than their more homogenous peers.

Seems like we’d be crazy not to be thinking about building more diverse teams, doesn’t it?

People make the culture

I recently spoke at Web Directions 2016 in Sydney, and was lucky enough to listen to a talk on gender in the tech industry by Aubrey Blanche from Atlassian. Aubrey made a point of how Atlassian has shifted its perspective from finding people who fit their culture, to having a culture defined by its people.

When hiring, this means tossing out the whole “do I want to hang out with them?” question. Instead, I’ve tried to replace that with more specific, more culture-agnostic questions:

  • Are they kind and empathetic?
  • Do they care about their work?
  • Do they have good communication skills?
  • Do they have good self-management skills?

If the answer to each of these questions is yes, then it’s very likely I will want to hang out with them all day, regardless of which movies they like.

As Aubrey points out, we can then focus on values, and leave culture alone. Our values might be that we treat each other well, that we do great work that we care about, and that we are largely independent but communicate well when it’s time to collaborate. Then we can also include this new question:

  • Do they bring a valuable new perspective?

Hiring based on these values will naturally build a culture that is more comfortable with diversity, because the benefits of diversity become more clear in our daily experiences.

Encouragement and change

Now you don’t need me to tell you no one’s perfect. But when it comes to emotional, high-stakes topics like this, you can see people getting caught in the crosshairs of reproach–and that’s scary to watch. Sometimes it can feel as if we’re all one questionable tweet or ill-considered joke away from public humiliation.

That in mind, let me tell you about a time when I was an idiot.

For context, you should know that I’m a white, heterosexual, cisgender male who grew up in a stable, upper-middle-class environment, and now runs his own business. I pretty much tick all the privilege checkboxes.

Last year at a design conference, I was chatting with industry friends. At some point I brought up a meme that I thought was funny, until one of my friends pointed out that it was sexist. And he was right.

Oh crap, I thought: I’m that guy at the conference, I’m a terrible person. Luckily my friend went easy on me. He understood how I missed the underlying sexist assumptions of the joke, and was happy to bring that to my attention without extending the accusation of sexism to me, personally. He effectively reassured me that I could do something bad, while still being a good person. He gave me the option to admit bad behavior and correct it, without hating myself in the process.

And this, I think, may be the key for people in my very privileged position to change. When problems like this come up, when we make missteps and unveil our biases and ignorance, it’s an opportunity for change. But the opportunity is often much more delicate than any of us would like. Successfully navigating a situation like that requires sensitivity and control from both sides.

For the transgressor, being called out on an issue can feel like being attacked, like an indictment. For those of us who aren’t used to being made uncomfortable, that can be shocking. It can be something we might want to quickly deny, to reject that discomfort. But not all discomfort is, in the end, a bad thing. To give others’ feelings and concerns merit–to validate their different perspective–may require us to sit with our own hurt pride or injured self-image for a bit. Something that may help us through these difficult feelings is to remember that there is a big difference between behaviors and identity. Bad behavior is not immutable. Quite the opposite, bad behavior is often a first step toward good behavior, if we can withstand the discomfort of acknowledging it, and muster the strength to change.

It’s tough getting called out for bad behavior, but things aren’t exactly simple on the other side of the confrontation, either. When we’re offended by someone’s ill-considered words or actions, it can cut to the quick. We might feel required to respond with the full force of our anger or outrage. After all, why should we be expected to police our own tone, when we’re responding to words that weren’t prepared with our feelings in mind? It can be hard, but employing our empathy, our compassion—along with our critique—can be the best way to affect the positive change we want to see.

Right now, you’re doing your best. But we can all do better. Recognizing that we’re doing some bad things doesn’t make us bad people. You have the courage to see what you’ve been doing wrong (unintentionally, I know) and fix that. You can admit to having unfair privileges in the world, without it being your fault for having ended up that way. The world is terribly, horribly unfair. It may very well get worse. But when we have sway, even over a tiny part of it, we have to do our best to balance those scales, and make things a little better. I can do more, and so can you. So let’s see if we can’t get to an even better place in 2017, together.

Acknowledgements

Many thanks to Aubrey Blanche and Annette Priest for their thoughtful consideration and feedback on this article. My infinite gratitude goes out, as always, to my editor Rose Weisburd, who helped me find my way even more than usual this time around.

News stories from Thursday 22 December, 2016

Favicon for A List Apart: The Full Feed 16:00 Learning from Lego: A Step Forward in Modular Web Design » Post from A List Apart: The Full Feed Visit off-site link

With hundreds of frameworks and UI kits, we are now assembling all kinds of content blocks to make web pages. However, such modularity and versatility hasn’t been achieved on the web element level yet. Learning from Lego, we can push modular web design one step forward.

Rethinking the status quo

Modular atomic design has been around for a while. Conceptually, we all love it—web components should be versatile and reusable. We should be able to place them like bricks, interlocking them however we want without worrying about changing any code.

So far, we have been doing it on the content block level—every block occupies a full row, has a consistent width, and is self-contained. We are now able to assemble different blocks to make web pages without having to consider the styles and elements within each block. That’s a great step forward. And it has led to an explosion of frameworks and UI kits, making web page design more modular and also more accessible to the masses.

Achieving similar modularity on the web element level is not as easy. Pattern Lab says we should be able to put UI patterns inside each other like Russian nesting dolls. But thinking about Russian nesting dolls, every layer has its own thickness—the equivalent of padding and margin in web design. When a three-layer doll is put next to a seven-layer doll, the spacing in between is uneven. While it’s not an issue in particular with dolls, on web pages, that could lead to either uneven white space or multilevel CSS overrides.

I’ve been using Bootstrap and Foundation for years, and that’s exactly what would happen when I’d try to write complex layouts within those frameworks—rows nested in columns nested in rows, small elements in larger ones, all with paddings and margins of their own like Russian dolls. Then I would account for the nesting issues, take out the excessive padding on first- and last-child, calculate, override, add comments here and there.

It was not the prettiest thing I could do to my stylesheets, but it was still tolerable. Then I joined Graphiq, a knowledge company delivering data visualizations across more than 700 different topics. Here, content editors are allowed to put in any data they want, in any format they want, to create the best experience possible for their readers. Such flexibility makes sense for a small startup, and we have a drag-and-drop interface to help organize everything from a single data point to infographics and charts, to columns, blocks, and cards. Content editors can also add logic to the layout of the page. Two similar bar charts right next to each other could end up being in quite different HTML structures. As you can imagine, this level of versatility oftentimes results in a styling hell for the designers and developers. Though a very promising solution—CSS Grid Layout—is on the horizon, it hasn't made its way to Chrome yet. And it might take years for us to fully adapt to a new display attribute. That led me to think: if we can change the Russian doll mentality, we can take one more step toward modular design with the tools available.

Learning from Lego

To find a better metaphor, I went back to Lego—the epitome of modular atomic design. Turns out we don’t ever need to worry about padding and margin when we “nest” a small Lego structure in a large Lego structure, and then in an even larger Lego structure. In fact, there is no such concept as “nesting” in Lego. All the elements appear to live on the same level, not in multiple layers.

But what does that mean for web design? We have to nest web elements for the semantic structure and for easy selecting. I’m not saying that we should change our HTML structures, but in our stylesheet, we could put spacing only on the lowest-level web elements (or “atoms” to quote atomic design terms) and not the many layers in between.

Take a look at the top of any individual Lego brick. If you see the space around the outside of the pegs as the padding of a web element, and everything inside the padding as the content, you will find that all Lego bricks have a consistent padding surrounding the content, which is exactly half of the gap between elements.

Lego bricks seen from the top, with the pegs representing box content and the padding highlighted to show its consistent width no matter the number of pegs on a brick.
No matter how many pegs a brick has, the padding around them is the same as on every other Lego.

And when Lego bricks are placed together, all the elements will have the same gutter in between.

A rectangle of Lego bricks seen from the top, edges touching, each with a thin line between the pegs and the padding.
The padding extends from the outer edge of the pegs of one brick to the outer edge of the pegs of the neighboring brick.

No other padding or margin needed; the gaps are naturally formed. All the elements—no matter how deeply they are nested—appear to be on the same level and need no CSS override or adjustment, not even the first- and last-child reset.

Putting it in code, we can make a class that adds the half-gutter spacing, and apply it to all the lowest-level web elements on the page. Then we can remove all the spacing on structural divs like .row and .col.

$gutter: 20px;
.element {
  padding: $gutter / 2;
}

One tiny tweak to be mindful of is that when the padding is only on .element, the padding between the outermost elements and the parent div would only be half the gutter.

White rectangle representing the complete assembly of bricks in the previous figure with gray rectangles representing the area taken up by pegs, showing that the padding creates full-size gutters between peg areas but only a half-size gutter around the periphery.
The periphery is only half the gutter, for now.

We need to add the same padding to the outermost container as well.

Peg areas shown as gray rectangles within a white rectangle representing the complete assembly, edges between bricks shown as black lines, and an extra pink line around the outermost edge showing added padding.
The outermost container with a half gutter added all around.

$gutter: 20px;
.container,
.element {
	padding: $gutter / 2;
}

And that will result in this:

White rectangle with gray rectangles and consistent padding between and around them.
With the added outermost padding, all the padding looks the same.

Think about how many layers of overrides we would need to create this layout with the current rows and columns mentality. The best we can do is probably something like this:

Diagram representing the previous Lego brick rectangle arrangement in rows and columns.
The same layout via rows and columns.

And in code:

See the Pen Complex layout the old way by Samantha Zhang (@moyicat) on CodePen.

With the Lego mentality, the spacing and the code can be much simpler, as shown in the two examples below:

Example with div:

See the Pen Complex layout with div by Samantha Zhang (@moyicat) on CodePen.

Example with Flexbox:

See the Pen Complex layout with flexbox by Samantha Zhang (@moyicat) on CodePen.

More flexible than Lego

Lego is a true one-size-fits-all solution. With Lego, we don’t get to tweak the padding of the bricks according to our projects, and we can’t have different horizontal and vertical padding. Web design offers us much more variation in this area.

Instead of just setting one value as the gutter, we can set four different variables and get a more flexible layout this way:

$padding-x: 10px;
$padding-y: 20px;
$padding-outer-x: 40px;
$padding-outer-y: 30px;

.container {
  padding: $padding-outer-y $padding-outer-x;
}
.element {
  padding: ($padding-y / 2) ($padding-x / 2);
}

The result looks like this:

The same arrangement of rectangles representing bricks, but with different values for the x and y padding.
Unlike with physical Lego pieces, we can set different values for the padding.

It’s still modular, but also has varying spaces to create a more dynamic style.

With responsive design, we could also want different spacing for different media queries. We can take our approach one step further and write our logic into a Sass mixin (alternatively you can do it with LESS, too):

@mixin layout ($var) {

  $padding-x: map-get($var, padding-x);
  $padding-y: map-get($var, padding-y);
  $padding-outer-x: map-get($var, padding-outer-x);
  $padding-outer-y: map-get($var, padding-outer-y);

  .container {
    padding: $padding-outer-y $padding-outer-x;
  }
  .element {
    padding: ($padding-y / 2) ($padding-x / 2);
  }
}

Using this mixin, we can plug in different spacing maps to generate CSS rules for different media queries:

// Spacing variables
$spacing: (
  padding-x: 10px,
  padding-y: 20px,
  padding-outer-x: 40px,
  padding-outer-y: 30px
);
$spacing-tablet: (
  padding-x: 5px,
  padding-y: 10px,
  padding-outer-x: 20px,
  padding-outer-y: 15px
);


// Generate default CSS rules
@include layout($spacing);


// Generate CSS rules for tablet view
@media (max-width: 768px) { 
  @include layout($spacing-tablet);
}

And as easy as that, all our elements will now have different spacing in desktop and tablet view.

Live example:

See the Pen Complex layout with mixin and varying gutter by Samantha Zhang (@moyicat) on CodePen.

Discussion

After using this method for almost a year, I’ve encountered a few common questions and edge cases that I’d like to address as well.

Background and borders

When adding backgrounds and borders to web elements, don't apply them to the .element div. The background will cover both the content and padding areas of the element, so it will visually break the grid like this:

Gray rectangles with white padding, with one of the rectangles and its padding replaced by a picture.
A background applied to .element breaks the grid.

Instead, apply the background to a child div within the .element div:

<div class="element">
  <div style="background-image:url();"></div>
</div>
Gray rectangles with white padding, with one of the rectangles replaced by a picture.
A child div contains the image so it doesn't break the grid.

I used this structure in all my examples above.

Similarly, the border goes around the padding in the box model, so we should also apply the border of the element to a child div to maintain the correct spacing.

White rectangle with blue borders around content areas.
As with a background, apply a border to a child div.

Full row elements

Another common issue occurs because we occasionally want full row elements, conceptually like this:

A gray horizontal rectangle with the words Title Here atop a white rectangle holding smaller gray rectangles.
Sometimes we want to break the grid with full row elements.

To style full row elements following the .container and .element structure, we need to make use of negative margin:

.element-full-row {
  margin: 0 (-$padding-outer-x);
  padding: ($padding-y / 2) ($padding-x / 2 + $padding-outer-x);
}

Notice that we need to add back the $padding-outer-x to the padding, so that the content in .element-full-row and the content in .element align.

A gray horizontal rectangle with the words Title Here atop a white rectangle holding smaller gray rectangles and a dashed line to show how the words align on the left with the left-most of the small rectangles.
The content in .element-full-row and the content in .element align.

The code above handles the horizontal spacing, and the same logic can be applied to take over vertical spacing as well (as shown in the example above–the header element takes over the top padding). We can also add a negative margin very easily in our stylesheets.

.element-full-row:first-child {
  margin: (-$padding-outer-y) (-$padding-outer-x) 0;
  padding: ($padding-y / 2 + $padding-outer-y) ($padding-x / 2 + $padding-outer-x) ($padding-y / 2);
}

It can be applied as a standalone rule or included in the Sass or LESS mixin, and then you will never have to worry about it again.

Nesting

Full freedom in nesting is the strong suit of this Lego CSS method. However, there is one kind of nesting we can't do: we can't ever nest an .element within an .element. That would create double padding, and the whole point of the method would be lost. That's why we should only apply the .element class to the lowest-level web elements (or "atoms" to quote atomic design terms) like a button, input box, text box, image, etc.

Take this very generic comment box as an example.

Generic comment box with title, text area, helper text, and button.

Instead of treating it as one “element,” we need to treat it as a pre-defined group of elements (title, textarea, button, and helper text):

<div class="comment">
  <h3 class="comment-title element">Add a new comment</h3>
  <textarea class="element"></textarea>
  <div class="clearfix">
    <div class="float-left">
      <div class="element">
        <button class="btn-post">Post comment</button>
      </div>
    </div>
    <div class="float-right">
      <div class="helper-text element">
        <i class="icon-question"></i>
        Some HTML is OK.
      </div>
    </div>
  </div>
</div>

Then, we can treat .comment as one reusable component–or in the atomic design context, a “molecule”–that will play well with other reusable components written in the same manner, and can be grouped into higher level HTML structures. And no matter how you organize them, the spacing among them will always be correct.

Varying heights and layouts

In the bulk of this article, we’ve been using the same fitted row example. This may lead some to think that this method only works for elements with defined height and width.

It’s more versatile than that. No matter how elements change in height and width, lazy load, or float around, the Lego-like padding will ensure the same consistent gap between elements.

Pinterest flow layout of 5 columns.
I made a quick Pinterest flow layout to demonstrate how this mentality works with fluid and changing elements.

See the Pen Pinterest Flow by Samantha Zhang (@moyicat) on CodePen.

Maintenance

Some of you might also be worrying about the maintenance cost. Admittedly, it takes time to learn this new method. But once you start to adopt this mentality and write CSS this way, the maintenance becomes extremely simple.

Especially with the layout mixin, all the spacing rules are centralized and controlled by a few groups of variables. A single change to the variables is carried through to all the elements on the page automatically.

In comparison, we might have to change padding and margin in 20 different places with the old method, and then we have to test to make sure everything still works. It would be a much more hectic process.

Grid layout

And finally, there is the Grid layout, which supports very complicated layouts and nests much more gracefully than block. You might be thinking this is quite a lot of hard work for a problem that is actually going away.

While many of the issues we talked about in this article might go away with Grid, it might take Grid years to get full browser support. And then, it might take a long time for the community to get familiar with the new method and develop best practices and frameworks around it. Take Flexbox: it's already supported by most browsers, but it's still far from widely adopted.

And after all, it could take a typical web user a long time to understand Grid and how that works. Similarly, it would require quite a lot of development for us to translate user layout input into good CSS Grid code. The old by-column and by-row method is way easier to understand, and when nesting is not an issue, it could stand as a good solution for websites that allow user configuration.

Conclusion

We started to implement this method at Graphiq in the beginning of 2016. Almost a year in, we love it and believe this is how we should write web layouts in the future. As we refactor each page, we’re deleting hundreds of lines of old CSS code and making the stylesheets way more logical and much easier to read. We also got far fewer layout and spacing bugs compared to all our refactors in the past. Now, no matter how our content editors decide to nest their data points, we’ve got very little to worry about.

From what we’ve seen, this is a real game changer in how we think about and code our layouts. When web components are modular like Lego bricks down to the elements level, they become more versatile and easier to maintain. We believe it’s the next step to take in modular web design. Try it for yourself and it might change the way you write your web pages.

News stories from Monday 19 December, 2016

Favicon for Zach Holman 01:00 Kicking the @realdonaldtrump Beehive » Post from Zach Holman Visit off-site link

Had a bit of free time yesterday on a particularly lazy Sunday, so I decided to tweet the president elect.

I was in a jovial mood and Trump was in a pissy mood so I thought I’d cheer him up by throwing him a bone (I assume he recharges his life force through absorption of internet hatred).

I spent the next few hours dealing with hundreds of replies from fervent Trump devotees and/or bots. Twenty-four hours later, I'm still getting a handful of replies every ten minutes or so, along with a few more faves every thirty seconds or so.

There’s a few things I found amusing and tepidly surprising about all of this.

Twitter’s threading has an impact

This was the first mild surprise. I made a smarmy toot at the future leader of the free world and apparently that got a lot of people riled up. So far I’ve gotten 600 faves on the first tweet, and on some of the replies I’ve gotten even more:

Conversation with Trump supporters

Right now I’m the third “thread” in the replies to the original tweet on twitter.com. That’s a ton of additional exposure for my original tweet and, in my case, seven additional tweets from a dialogue I had with some white dude who joined Twitter in November 2016.

This generates a ton of extra faves and replies, to the point of being alarming; if you’re just trying to blow off steam and drunktweet dear ol’ Donny and your tweet gets in the main threading, then you could have a lotta grumpy people yelling at you. The internet is small. And big. Somehow at the same time.

These people are grumpy

From what I can tell, most of the people who reply to replies on Donald Trump’s tweets almost exclusively use Twitter for this purpose. There was a fair amount of bot-style anonymous users who would jump in — on both sides — but there was also a lot of people with only one or two tweets on their account that weren’t in reply to something else.

I did get called a few names; libtard is real hot right now, as is jerk and idiot. cuck seems to be on the decline. I was not called any sexist names or racist names because I’m playing life on white male easy mode. Had I been representing as a woman, this would have been a different and not at all mildly amusing lazy Sunday experiment. Even in easy mode, I did get a couple death/well-being threats, so yeah, neat I guess.

Expectations of discussion

lol yeah there’s not really a real approach you can take to have a discussion here, from what I could tell. I took a few different approaches to replies to my tweet, but most of them held at least the possibility of having a kind of rational debate, but the closest I got was mostly getting yelled at every five minutes for two hours (although at the end I did wish her happy holidays and a good night and she wished me merry christmas and that thank god we’re free unlike the far east, which I graciously credit her with half a point for it).

I have 40,000 followers (and am Verified™, whatever that means), so I think that also put a different target on me. I did reply to a whole ton of the replies to my tweet, and I was kind of amused how many times an account would tweet once at me and never respond back, even after I asked them a clarifying question. I think we’re used to people with a “following” online to not really interact back with you, and as such it’s fairly easy to just push bile out into the void to make yourself feel better. Then again, they’ll do that anyway, so what do I know.

Spitting fire

Anyway I don’t really have a ton of amazing takeaways from this; just thought it was interesting. I was in the right mindset to tackle discussions and hatred, and if you’re in the right mindset, go for it; it does give you a context on The Other Twitter™ that honestly, I don’t get from my normal Twitter timeline. If you’re not in the right mindset, though, run off and hide; even if it’s all fun and games, your thick skin can feel thinner when you have hundreds of people yelling at you, unfortunately, and getting caught in their fervor sucks.

Anyway, in light of not having an impressive takeaway from this post, I just wanted to end with what I consider to be my best comeback of all time in any medium. Happy holidays.

News stories from Thursday 15 December, 2016

Favicon for A List Apart: The Full Feed 16:00 Demystifying Public Speaking » Post from A List Apart: The Full Feed Visit off-site link

A note from the editors: We’re pleased to share an excerpt from Chapter 1 of Lara Hogan's new book, Demystifying Public Speaking, available now from A Book Apart.

Before you near the stage, before you write the talk, before you even pick a topic, take time to get comfortable with the idea of giving a talk.

You’re reading this because something about public speaking makes your palms sweat. You aren’t alone; when I created an anonymous survey and asked, “What’s your biggest fear about public speaking?” I received over 300 replies. Though the fears all revolved around being vulnerable in front of a large group of people, I was surprised how widely the responses ranged.

See for yourself—I’ve grouped a handful of replies to illustrate the spectrum of fears.

People are worried about their voices:

  • “The sound or pitch of my own voice.”
  • “Voice cracking up—I forget to breathe from the diaphragm and come across sounding nervous and uninformed.”
  • “Forgetting or skipping over what I want to say, heart racing (and getting out of breath quicker), getting tongue-tied.”

People are worried about their bodies:

  • “Being judged for being fat, not on my presentation content.”
  • “In middle school I got something in my eye during a class presentation, and my eyes would not stop watering. I’m terrified it will happen again.”
  • “Needing to pee during the speech!”
  • “Falling on stage.”
  • “People judging my appearance, whether I’m dressed appropriately.”

People are worried about technical or wardrobe malfunctions:

  • “Problems connecting laptop to projector.”
  • “Making stupid coding mistakes during live coding.”
  • “Open pants zipper (because it’s happened).”

People are worried about being wrong and being challenged:

  • “Elegantly explaining something that is actually wrong.”
  • “Showing that I’m ignorant about something I thought I was knowledgeable about.”
  • “Getting a question I can’t even begin to answer.”
  • “Being wrong and being called out on stage during Q&A.”
  • “Getting heckled.”
  • “Vocal skeptics or doubters.”

People are worried about their performance:

  • “Not being impressive enough.”
  • “That everything I say becomes so messy anyone can refute it.”
  • “Since I’m not a native English speaker, my biggest fear is not making any sense when speaking.”
  • “That no one learns anything, and the audience is starkly aware of it.”
  • “Being exposed for the fraud I have always felt like.”

Phew. Given the potential for these moments of total—human—disaster, why should we even bother embarking on this journey toward the stage?

To start, public speaking (or put another way, broadcasting your abilities and knowledge) has definite career benefits. You grow your network by meeting attendees and other speakers, and you gain documented leadership experience in your subject area. People looking to hire, collaborate with, or fund someone with your topic expertise will be able to find you, see proof of your work, and have a sense of the new perspective you’ll bring to future projects.

Those professional benefits are huge—but in my experience, the personal benefits are even more substantial. Giving a talk grows so many skill sets: crafting a succinct way to share information, reading an audience, and eloquently handling an adrenaline-heavy moment. You'll prove something to yourself by overcoming a major fear, and you should take pride in knowing you taught a large group of people something new that will hopefully make their work or lives easier. Public speaking experience brings a lot of knock-on benefits too, like a stronger visa application or more confidence in your everyday spotlight moments: a standup meeting, code review, design critique, or other project presentations.

No matter the impetus, trying your hand at public speaking is a brave act. While it’s a different challenge for everyone, we do have a few tools to help tackle our fears.

Flip that fear around

First, give yourself permission to be anxious. Even renowned speaker and industry veteran Eric Meyer still gets nervous giving talks, as detailed in his article “The Stages of Fear”:

A hundred public talks or more, and it’s still not easy. I’m not sure it ever will be easy. I’m not sure it ever should be easy. […] Every speaker I know feels pretty much exactly the same. We don’t all get the same nervous tics, but we all get nervous. We struggle with our fears and doubts. We all feel like we have no idea what we’re doing.

Being nervous is totally normal. Consider what you’re juggling: sharing information, entertaining the audience, and guessing (or worrying over) how you’re being perceived. Keep in mind, though, being nervous is not a sign you’ll do poorly. Public speaking isn’t an everyday context, and you may still get butterflies even as you gain experience and improve your speaking game.

But if you can’t coolly eliminate all your fears and nerves like some stoic robot, what can you do? One tactic is to try reframing your anxiety in a positive or motivating way, as designer Lea Alcantara suggests:

Instead of worrying, flip your perception of nerves as an indication you care as opposed to dread of failure. There is no shame in caring deeply about a subject and what people think about your talk.

Caring feels a lot more approachable than dreading failure, and it gives you a way through: use your body’s natural reaction to stress to improve your talk. Invest that energy into more research of your topic, more practice, and more feedback-gathering—all acts within your control. Let your nerves become part of the process—or try accepting that—and just maybe, in time, they’ll feel more useful than disastrous.

What makes you tick?

To flip your fears into motivations, let’s dip into what makes you tick. Understanding who you are will help you determine where to invest that extra energy as you make your way toward the stage. Once you begin to name what scares you, what comforts you, and what drives you, you’ll be able to home in on which talk format, topic, venue type, and preparation style will calm those fears and build your excitement.

To get you started, think through these:

  • What makes you most excited when you think about public speaking? What do you want to get out of it?
  • What makes you most nervous when you think about public speaking?
  • What scenarios do you want to avoid?
  • What size audience do you think you might be most comfortable speaking to? Why?
  • Whose feedback matters most to you on your talk or presentation style?
  • What would you want people to take away from your talk?
  • What do you want to happen for you or your career after your talk? (Examples: someone offers you your dream project, someone you admire asks for your advice, people shower you with praise, you get right back to work, etc.)

That’s a lot of introspection, but it’s worth it. As we move through this book, we’ll go through the varied paths and aspects of public speaking, and your answers will guide you to the right fit for you. For instance, if you’re afraid of seeing a sea of strange faces before you, maybe a smaller meetup is the best venue to get in some practice. If you’re afraid of saying something patently false onstage, then pick a topic like a case study from your work that you know inside and out, and practice your Q&A session with friends who can help you fact-check your content. Or, if you’re excited to teach people skills they can immediately put into practice, opt for a workshop format and give hands-on help to folks. Whatever your goals and style, you can find a speaking opportunity that resonates with you.

Move beyond the “rules”

You’ve heard the adages: don’t say “um,” don’t say “uh.” Excise “like” with extreme prejudice. Don’t use bullets on your slides. Never, ever read from your notes. Some folks have an archetype of what a great speaker sounds like, or an audience size that feels real, or this idea you need to give a deeply technical or novel talk for it to count.

But you know what? If I can say one thing in this book about giving talks, it’s do what works for you. Truly.

Of course, it’s hard to move past the impulse to embrace rules—it’s reassuring to think we have a straightforward map to success. We try to mimic speakers who capture our attention or those whom our peers praise. We hold up examples of “ideal” presentation styles, and we instruct new speakers to follow suit. We see a lot of the same people, and we can’t help but absorb a lot of the same opinions on what a good speaker looks like or sounds like.

Just because we’ve built a system, it doesn’t mean it’s right. What we need to see represented onstage is a spectrum of speakers with different insights and ways to teach us about them. Your voice is valuable, and your own. If you choose to share it, we will all certainly be the better for it.

Public speaking is a journey that, like any other, involves practice and time to make you feel comfortable and successful. Take heart from Tiffani Jones Brown:

The worst case scenario is your talk flops—in which case you’ll be stronger for it. The likelier scenario is you’ll give a couple decent talks, followed by better ones, followed by even better ones, until you give one that really makes a difference.

I don’t want to set out any rules in this book—forget them. What I do hope is to help you forge your own path, so you make your way to that talk that makes a difference. Let’s get started.

Want to read more?

This excerpt from Demystifying Public Speaking will help you get started. Order the full copy today, as well as other excellent titles from A Book Apart.

Demystifying Public Speaking by Lara Hogan

 

Favicon for Zach Holman 01:00 Working Remotely in Cafes and Possibly Even Surviving » Post from Zach Holman Visit off-site link

Alternative titles: Retaining Your Humanity While Working in Cafes, or possibly How To Have Your Coffee and Eat It Too, or even the tried and true There’s So Many Damn Weirdos Leeching on the Wi-Fi.


These days, I spend the majority of my week working out of different cafes. Since I don’t have an office, the coffee shop is a hallowed place of productivity for me. From tiny Italian coffee spots in San Francisco’s North Beach, to cozy cat cafes in Taipei, to the Hard Rock Cafe in the Mission, I’m pretty good at adjusting to different environments.

There are a lot of secrets I’ve discovered in the past year to keep my sanity while doing all of this, though. Most of them don’t involve drugs, and some of them are even legal.

Knowing where to go when you need to go

Most coffee shops are filled with scum who had previously been unceremoniously fired from their “real” jobs and are now just floating through life, living off the graces of wireless internet for the low price of a single cup of tea over six hours. It’s unsurprising that, with a clientele of such villainy, the provided toilets might look like a bomb had gone off in them at some point during the Nixon administration.

[Image: a gross toilet tho]

This appears to be the case even when it comes to the really fancy hipster cafes that are all the rage right now amongst the People Who Wear Black Framed Glasses Crowd. Besides which, if it’s a small cafe, who wants to stand in line at what is most likely a single occupancy bathroom? There’s nothing worse than watching the door open up, looking the dearly departed directly in their eyeballs, and mentally communicating with all your face that I know exactly what you were doing in there. I mean, there’s nothing worse except being on the other end of that, I guess.

You deserve better. I mean, you’re a good person. You didn’t even vote for a third party this year. You deserve luxury. You deserve privacy. You deserve four star excellence when it comes to taking care of your business.

What better place to enjoy four star service than at a four star hotel?

If you aren’t located in a proper office with proper toilets, just visit a luxury hotel and enjoy a moment to yourself in styleeeeee. Go somewhere disgustingly gaudy and imagine you’re a jerk for awhile. Treat yoself.

For example, this would be an ideal location to take a shit:

[Image: a gross family tho]

Start making a mental map of where the best hotels are in your city, and if they have a large, semi-public lobby area. Chances are if it’s a hotel that does a lot of conference business they’re going to have bathrooms in the lobby or conference area (and most likely both). Conference hotels are the best, really, because they deal with large economies of scale as they handle possibly hundreds of people at once, leading to toilet stalls and urinals as far as the eyes can see.

You’re literally in the lap of luxury.

If you’re worried about being called out by the staff as a possible pedestrian pooper, well, you’re very unlikely to arouse suspicion. Even so, just remember the golden rule of life: act like you belong there, and everyone will just tend to go with it. Also no one cares.

Retaining your street cred while at a Starbucks®™©

Every now and then you’re going to be stuck in a weird area of town. You’re in the barren wasteland of cafes in your city, devoid of anything with Four Barrel® or Blue Bottle™ or any other mixtures of {Adjective + Container} brands of coffee you can come up with.

So you’re resigned to a singular choice that no proud coffee-head can be happy with: you have to go work out of a Starbucks™.

I know, gross. Starbucks® is so proletarian, and they come from Seattle, and things from Seattle haven’t been cool since at least February, and they burn their coffee beans (whatever that means) and probably their children too.

In these situations I like to lie and tell people that I’m waiting for a friend. Make a lot of frustrated sighs and look at your watch. Mutter aloud, “ugh I guess I’ll just start doing a little work while I wait”. And then lean over to the nearest customer and offer a snide, “Bankers, am I right??” because no one inherently likes bankers; even bankers don’t like bankers.

If you do end up purchasing a drink, it’s going to be a huge blow to your ego to be seen drinking out of a Starbucks© coffee cup. So do as I do: get a gigantic soda from your local big chain movie theater complex and slip the entire Starbucks® cup in there. Then instead of losing street cred due to your coffee choices, you can impress people with your ability to sip from what is obviously a year’s serving size of soda.

Otherwise you could just ignore everyone. I mean I don’t even drink coffee so who cares if they burn the beans.

Wi-Fi

Your ultimate goal in life is to look like some kind of touched soothsayer as you walk around your city, your phone authing to different free wi-fi hotspots as you go, since you’ve already joined them in the past. It’s basically like that scene with Cypher from The Matrix, except instead of saying “All I see are blonde, brunette, redhead…” you’d say “All I see is WPA2, open, WEP…”.

If it’s an open network, you’re good: just connect, fire up a VPN so jerkholes don’t gank your unsecured traffic, and you’re set.

If the coffee shop has a password on their wi-fi, you have to get creative. You have a couple options here.

Number one: you could just ask. Stand up from your seat and yell out to no one in particular, “DOES ANYBODY HAVE THE WIFI PASSWORD?” and then act like you’ve heard the cryptic string of characters they read back at you, go back to your computer, painstakingly type them in, and then immediately leave the cafe entirely when you realize what you typed was not at all correct and you don’t want to have to ask twice.

The real option, though, falls neatly in line with our driving philosophy in life: never talk to humans, if you can manage it. Did you know that the leading cause of death in humans is other humans? Pretty wild.

For these situations I like to check Yelp or Foursquare comments from my phone. Sometimes people will drop a password from three years ago that might still work. If that doesn’t succeed, it’s best to bring an industrial-sized spool of ethernet cabling with you and plug into your internet at home. Problem solved.

Leaving your laptop

Sometimes nature calls (and there’s no hotel around). Or you have to go up and order some more food or drink. Or the dude next to you smells and you need a breath of fresh air for a second. It sucks to have to pack up your laptop every time, but can you really trust the other humans in the cafe not to steal your shit when you’re not looking? After all, the leading cause of all robberies is other humans.

I like to try a decoy trip first. If there’s a bush or a table nearby, slowly drop a pen or reach down and tie your shoes. At that point, stay low and crawl to your designated hiding place. Remain in wait until someone either attempts to steal your laptop or until the coast is clear. (This could take several hours.) If a thief does come, jump out and yell “AHA! GOT YA!” The attempted criminal will become flustered, looking around frantically for an exit, until you collapse an arm around their shoulders and laugh it off, shrugging off any possible notion that you’d call the cops on them.

Once you’ve had a beer or two with the perpetrator and eased the tension a bit, call the authorities.

Bandwidth Courtesy

Many times when you’re in a cafe, the internet will either be not super great, or simply spread way too thin amongst you greedy internet-sucking leeches.

So I’ve made a tool to help you out: it’s called bandwidth-friends.

Once installed and run, it’ll monitor the transfer of bandwidth on your machine. If you happen to go over the threshold — say, you were busy downloading pirated copies of the latest Gilmore Girls episodes — your Mac will, out loud, say the following words, with say(1):

ATTENTION ATTENTION I AM CURRENTLY EXPERIENCING A HIGHER THAN NORMAL BANDWIDTH VOLUME, PLEASE ACCEPT MY APOLOGIES WHILE I USE THE INTERNET SLIGHTLY AGGRESSIVELY RIGHT NOW HUGS AND KISSES

Worried about playing this at the appropriate volume in a crowded cafe? Don’t worry, that’s a perfectly normal worry!

bandwidth-friends will automatically turn your volume up as loud as possible — even if you’ve muted your computer! — to make sure that you are being extra nice while informing the rest of the cafe.

Together, we can all help each other out while on the internet.

Feed the demon

Not having a proper office can be difficult, but with these tips hopefully you’ll become a proper, caffeinated, horrible person, just like everyone else in normal offices.

Happy cafe surfing!

News stories from Wednesday 14 December, 2016

Favicon for A List Apart: The Full Feed 18:36 This week's sponsor: ZARGET » Post from A List Apart: The Full Feed Visit off-site link

ZARGET: Analytics Tool for designers. Capture exactly how users experience your website. Settle design debates with data.

News stories from Tuesday 13 December, 2016

Favicon for Kopozky 16:05 What Will I Be In The Year 2020 » Post from Kopozky Visit off-site link
Favicon for A List Apart: The Full Feed 16:00 Managing Ego » Post from A List Apart: The Full Feed Visit off-site link

A note from the editors: This is Part 2 of the series entitled “Defeating Workplace Drama with Emotional Intelligence.”

We’re in an industry where we regularly hear that our ideas are bad. We can get yelled at for overlooking something, even if we didn’t know about it, and we frequently encounter threats to our ego that can turn any one of us into an anxious and irrational coworker.

Minimizing our exposure to ego-damaging situations can be valuable in preventing anxiety, but that’s sometimes beyond our control. Unfortunately, when threats can’t be controlled, confidence is the next thing to take a hit. Professional and personal self-worth may seem vulnerable, but they can also be reinforced and strengthened far in advance.

Client drama, ground zero

I shrank in my chair as a client technical contact listed off everything he hated about the site I had just built. The list was not short, nor was it constructive. When it came time for him to make his recommendations, I went on the offensive and launched into my own opinions on how terrible and impossible his ideas were. By the end of the phone call, everyone was on edge and I was left with one desperate question: What just happened?

I found out the next day that the website I built was originally supposed to be an internal initiative, handled by the technical contact who had berated me. In short, his ego was bruised—and by the end of the phone call, my ego was bruised too. This brought out the worst in each of us. The result was a phone call full of drama that shall live on in infamy.

There were a few things wrong with that conversation. First, the technical contact clearly felt threatened by my website. But my history with this guy showed me that he felt threatened by most ideas we brought to him, so we also had to give some thought to where to draw the line with validating him on this. We should have employed a long-term strategy for strengthening that relationship by validating him at other times. Lastly, there are things I could have done to guard myself against irrationality and drama when that conversation turned south.

In short, everything went wrong in this scenario. That’s bad for me, but good for you, because it means we can learn a lot from looking at it. Let’s dig in.

Validating self esteem to prevent anxiety

Everyone responds to external feedback and affirmation—some more than others. So how do we tailor our feedback to avoid causing undue anxiety?

When you notice someone suddenly get worked up about something, go over what just happened. You probably introduced a threat. Did you propose a new idea? Did you point out a flaw in their idea? (Ideas are tied very closely to self esteem.) What was the idea? You’ve just pinpointed where their self esteem comes from.

Just as web professionals usually draw self esteem from the things that got them the job in the first place, marketing and account people do the same. Marketing people may prize their own creative ideas in a campaign, or their analytical skills when critiquing a campaign; account people often value their communication skills and ability to read people. When these skills are called into question, it produces anxiety, which can quickly lead to drama.

Think about that marketing person who can’t accept any creative idea as-is—who feels the need to make revisions to any idea that comes in. Creativity is the source of this person’s self esteem, so pushing back on those ideas without first validating them will introduce threat and result in anxiety.

What about that developer who won’t accept other people’s suggestions, and shoots down others’ ideas as impossible or too impractical? Problem solving and technical know-how are the sources of this person’s self esteem, and self esteem must be boosted by validating those strengths to get anywhere in a discussion of the merits of said ideas.

Ok, great, so we know where their self esteem is coming from. How do we validate these traits to prevent drama?

Consider the conversation I had with the client’s technical contact. When the technical contact began listing everything he hated about my site, I should have noticed that his own ideas were invalidated by the proposal of my ideas, which were being presented in the site I designed and built.

Rather than immediately protest (producing more threat), I should have asked questions related to his expertise with the client brand and business goals. I could have asked for help and affirmed his problem-solving ability (boosting self esteem and lowering threat) before re-asserting my own ideas. Had I taken this approach, there’s a good chance I could have learned something about the client in addition to calming down their technical contact.

Simply acknowledging others’ ideas and the thought that went into them can go a long way in validating sources of self esteem and quelling anxiety in the workplace.

When validation is not enough

There are times when there is such an emotional deficit created by a blow to the ego (possibly to an already-low self esteem) that no amount of validation will fix it. Dealing with a vulnerable or shattered self esteem can be difficult, and fixing it can be impossible. In those cases, no level of threat is tolerable and no level of self esteem boosting is sufficient.

Going back to my conversation with the client technical contact, what if he remained unsatisfied until he had the project back on his plate? Obviously, this is not a solution that’s good for either the agency, who needs the work, or the client, who determined that the agency was a better fit than their internal team.

In these situations, preventing or calming anxiety may be impossible because the problem is likely much bigger than the conversation at hand. It’s hard to apply a short-term solution to a long-term problem. In those cases, there are two things to do: minimize damage, and employ a long-term strategy to strengthen the relationship.

Minimizing damage means avoiding triggers and being as understanding as you can to the other person’s plight without sacrificing the project. If the other party feels that their ideas are being invalidated, it’s a sign that they feel that others aren’t taking their contributions seriously. (It may or may not be true in reality, but that’s how they feel.) That’s a pretty rough place to be no matter who you are. In that case, treat their contributions respectfully and be understanding when they get defensive about them.

Employing a my-way-or-the-highway authoritarian approach is the opposite of what we’re going for. This approach increases threat and can lead to a lot of ugly politics, with people going behind your back to gain support for their cause because they feel that any ideas brought to you are being invalidated. There are some situations where this is the only way forward, but those situations are few and far between—as well as rough and aggravating. Only go this route if you’ve exhausted all other options.

Read on for a long-term strategy to strengthen the relationship.

Using self esteem to build long-term relationships

As web professionals, we’re in the idea business—but so are the marketing people we often deal with. Those marketing folks will probably react poorly when their self esteem is threatened by conflicting and challenging ideas; but they usually react well when treated with deference and asked to explain their ideas and contribute their strengths. While this can be done on a case-by-case basis to prevent anxiety, it can also be done proactively to build better relationships with clients, coworkers, and others.

Once you’ve identified the source of a person’s self esteem, start deferring to them on that subject. Treat them as an expert on that subject. (In many cases, they probably are an expert on that subject.) Be open to their ideas and suggestions, and willing to integrate them into your own.

This process can take time, depending on the emotional deficit they begin with and your flexibility in welcoming their ideas. But over time, the beneficiary of your emotional toil will begin to see you as an ally and partner. This is a very good spot to be in.

The keyword here is intentionality. This process cannot happen by happy accident—it takes work with planning and strategy. Obviously, the mental energy required for this means you won’t be able to do it for everyone you work with. Give some thought to which of your working relationships have the most strategic importance and which could most benefit from additional trust and respect. Chances are a few will pop out at you.

Being intentional about boosting the self esteem of your coworkers and clients not only makes them easier to work with, but creates relational equity that can be cashed in at a later time for deference, respect, and allegiance. Remember, the less you challenge things in a relationship, the more the other person will listen when you do. Though it takes time, it will make your job way easier in the long run.

Guarding yourself against anxiety

I wish I could say I didn’t personally need the advice in this section—but I do. There are times when we all do. Let’s be honest: we’ve all been that angry client technical contact at some point, and it certainly doesn’t help our careers. The two things we apply to others can also be applied to ourselves to prevent anxiety: we can reduce threat, and we can boost self esteem.

At first glance, it may seem impossible to reduce threat coming from others. We can’t just ask everyone to be nicer to our egos. But some perspective can go a long way in reducing perceived threat.

In the example above, I reacted poorly because the client’s technical contact got mad at me on the phone. He challenged all of my ideas and was doing all he could to dismiss them entirely. What I didn’t realize until much later was that he wasn’t mad at me, or my ideas—he was mad at an unstated problem. Maybe he had been burned by another agency’s incompetent development team in the past. Maybe he had major concerns that weren’t being heeded by his company’s marketing team. Ultimately, I don’t know what the problem was, but I realize now that he probably would have been mad no matter what or who we put in front of him.

What I find is that angry people aren’t always mad at me—many times, they’re mad at the problem. They’re challenging my ideas not because they doubt them, but because they want to make sure that they’re the best solution to the problem. When viewed this way, it’s a lot easier to avoid being defensive, because it’s not me versus you—it’s me and you versus the problem. It’s not easy to counteract that fight-or-flight response that gets triggered when people start challenging your ideas, but forcing yourself to do so usually goes a long way in helping to solve the problem without escalating into drama.

Having a healthy view of yourself and your capabilities can also guard against anxiety. It’s very important to have a self-image independent of anything else going on around you. There’s one big difference between healthy self esteem and unhealthy pride: social comparison. Healthy self esteem is knowing that you’re good at something and being content with that; unhealthy pride is knowing that you’re better than someone else.

Being better than someone else is actually a rather tenuous place to be. Comparing yourself to a moving target—which may be moving past you—usually results in you trying to hammer the target down into a place where you can move past it, either by putting the other person down or filling yourself with false confidence in your own ability. This is never a good thing.

If a discussion on how to solve a problem devolves into a binary battle of opinions with a winner and a loser, there are no winners because the original problem becomes the loser. It doesn’t matter if you beat the other guy if the solution suffers for it. Instead of seeking to be a winner, you should seek to be a problem-solver. In the web industry, ideas don’t mean anything unless they solve real-world problems. It is always worth giving up some or even all of your idea if it means improving the solution.

Recognizing the roots of anxiety

Workplace drama and the anxiety beneath its surface, far from being unpredictable and random occurrences, are often the result of deeply held fears and insecurities. Avoiding an unmitigated drama disaster means dealing with underlying issues like self esteem. It can be difficult to navigate these waters, and even more so to turn the tides and produce happier relationships—but the benefits far outweigh the costs.

News stories from Monday 12 December, 2016

Favicon for Zach Holman 01:00 Bring in the Goddamn Adults Already » Post from Zach Holman Visit off-site link

Like every crank who has been in the startup world for awhile, I’m starting to appreciate experience.

Tech’s disruption fetish

The tech industry pushes such a youngin’s narrative: take kids fresh from their second year of university, shove them into a venture-backed position, and kaPLOWEY! Money and fame rains from the sky! Money party time!

It makes sense, though: their half-semester course in undergraduate sociology definitely qualifies them to manage their employees’ livelihoods.

Most of this focus is on disruption, which, come to think about it, used to be a negative word back when it happened to your water supply or the regional power grid. Now it’s good to disrupt old industries, and we throw a lotta bright-eyed bushy-tailed kids on magazine covers to promote that. And to some extent it’s true: younger perspectives lend themselves toward being ignorant enough to try something new, to rethink unchanged processes. Granted, that can go hand-in-hand with the “most startups fail” narrative, but who am I to get in front of a good cliché.

It’s different

The thing that kills me — and I hear it every few months — is when young startup founders make dumb decisions because they don’t have any historical context to inform their decisions. They haven’t been there before. And yeah, that tends to work for product, where you’re inventing something new, but there’s a really good chance you shouldn’t be experimenting with your people.

Time and time again, the young startup promotes their longest-tenured young engineer to become CTO of their 20-something startup. And it makes sense on the surface, because it’s their “best” engineer. And why not? They’ve been there for so long that they know the system they’ve built more than anyone else.

But now they have two problems: they lose their “best” engineer, and on top of that, they gain what’s probably a shit manager.

I’ve heard startups tackle this in all number of manners. One startup was confident when they said, “Yeah, we’ll send him to take management classes and spin him up to speed in no time; he’s a super fast learner”, neglecting to realize that he’s a fast learner when it comes to new programming languages, not understanding humankind.

Do you know who the best managers were early on at GitHub? The ones who had done it before, preferably for years, and preferably at companies who had a strong management culture (think Microsoft, for example).

Yes, you can A/B test managers and employees with satisfaction surveys to optimize over time. Yes, you can learn management on the job, starting from nothing. But you’d also be a fucking moron to rely upon that for the whole organization.

With product, if you deploy a breaking change, you can also usually roll back in minutes. Unless you’re building heart EKGs or something similarly mission critical, you can afford to be a bit cavalier. Okay, I’ll even say it: you can be disruptive. And disrupting product is lit, or whatever the kids in Brooklyn say these days.

Disrupting people is not lit. If you deploy a breaking change to your organization — i.e., hire an incompetent manager who is a huge dickbag — you can’t rollback the number of people who quit, go through real emotional issues, or otherwise become dysfunctional in the organization.

The fresh-outta-college thing

This isn’t a young-versus-old issue, although that can inherently play a part. It’s a matter of experience, and being exposed to these things, either directly or even indirectly.

I’ve gotten asked a lot over the years whether someone should drop out to instead pursue their dreams of starting a startup. I kinda had some wiggle room initially in my responses, but now the question itself seems kinda mind-boggling to me.

Of course spend at least a couple years working for someone else. There’s such a bonkers amount of lame shit that you learn that will serve you in spades down the line: how does insurance work? How are salaries dealt with? How do good companies deal with firing people? Bad companies? How does product get built competently? What did you like about your experience, and what did you hate? A little bit goes a long way.

Follow people who have that experience

One of the tweets I’ve referred back to over and over again in conversation is this reply from Startup L. Jackson (may he rest in peace):

Sometimes you gotta fuck up — or get fucked up — to learn how to avoid making those same mistakes in the future. And then you’ll make more mistakes, and a few of your employees will run off and start their own thing, vowing to never make the mistakes you made. And then they’ll make their own mistakes.

It’s a wonderfully shitty cycle. 💖

News stories from Tuesday 06 December, 2016

Favicon for A List Apart: The Full Feed 16:00 Accessibility Whack-A-Mole » Post from A List Apart: The Full Feed Visit off-site link

I don’t believe in perfection. Perfection is the opiate of the design community.

Designers sometimes like to say that design is about problem-solving. But defining design as problem-solving is of course itself problematic, which is perhaps nowhere more evident than in the realm of accessibility. After all, problems don’t come in neat black-and-white boxes—they’re inextricably tangled up with other problems and needs. That’s what makes design so fascinating: experimentation, compromise, and the thrill of chasing an elusive sweet spot.

Having said that, deep down I’m a closet idealist. I want everything to work well for everyone, and that’s what drives my obsession with accessibility.

Whose accessibility, though?

Accessibility doesn’t just involve improving access for people with visual, auditory, physical, speech, cognitive, language, learning, and neurological difficulties—it impacts us all. Remember that in addition to those permanently affected, many more people experience temporary difficulties because of injury or environmental effects. Accessibility isn’t a niche issue; it’s an everyone issue.

There are lots of helpful accessibility guidelines in Web Content Accessibility Guidelines (WCAG) 2.0, but although the W3C is working to better meet the complex needs of neurodiverse users, there are no easy solutions. How do we deal with accessibility needs for which there are no definitive answers? And what if a fix for one group of people breaks things for another group?

That’s a big question, and it’s close to my heart. I’m dyslexic, and one of the recommendations for reducing visual stress that I’ve found tremendously helpful is low contrast between text and background color. This, though, often means failing to meet accessibility requirements for people who are visually impaired. Once you start really looking, you notice accessibility conflicts large and small cropping up everywhere. Consider:

  • Designing for one-handed mobile use raises problems because right-handedness is the default—but 10 percent of the population is left-handed.
  • Giving users a magnified detailed view on hover can create a mobile hover trap that obscures other content.
  • Links must use something other than color to denote their “linkyness.” Underlines are used most often and are easily understood, but they can interfere with descenders and make it harder for people to recognize word shapes.

You might assume that people experiencing temporary or long-term impairment would avail themselves of the same browser accessibility features—but you’d be wrong. Users with minor or infrequent difficulties may not have even discovered those workarounds.

With every change we make, we need to continually check that it doesn’t impair someone else’s experience. To drive this point home, let me tell you a story about fonts.

A new font for a new brand

At Wellcome, we were simultaneously developing a new brand and redesigning our website. The new brand needed to reflect the amazing stuff we do at Wellcome, a large charitable organization that supports scientists and researchers. We wanted to paint a picture of an energetic organization that seeks new talent and represents broad contemporary research. And, of course, we had to do all of this without compromising accessibility. How could we best approach a rebrand through the lens of inclusivity?

To that end, we decided to make our design process as transparent as possible. Design is not a dark art; it’s a series of decisions. Sharing early and often brings the benefit of feedback and allows us to see work from different perspectives. It also offers the opportunity to document and communicate design decisions.

When we started showing people the new website, some of them had very specific feedback about the typeface we had chosen. That’s when we learned that our new headline font, Progress Two, might be less than ideal for readers with dyslexia. My heart sank. As a fellow dyslexic, I felt like I was letting my side down.

My entire career had been geared toward fostering accessibility, legibility, and readability. I’d been working on the site redevelopment for over a year. With clarity and simplicity as our guiding principles, we were binning jargon, tiny unreadable text, and decorative molecules.

And now this. Were we really going to choose a typeface that undid all of our hard work and made it difficult for some people to read? After a brief panic, I got down to some research.

So what makes type legible?

The short answer is: there is no right answer. A baffling and often contradictory range of research papers exists, as do, I discovered, companies trying to sell “reasonably priced” (read: extortionate) solutions that don’t necessarily solve anything.

Thomas Bohm offers a helpful overview of characters that are easily misrecognized, and the British Dyslexia Association (BDA) has published a list of guidelines for dyslexia-friendly type. The BDA guidelines on letterforms pretty much ruled out all of the fonts on our short list. Even popular faces like Arial and Helvetica fail to tick all the boxes on the BDA list, although familiar sans serifs do tend to test well, according to some studies (PDF).

And it’s not just dyslexia that is sensitive to typography; we recently had a usability testing participant who explained that some people on the autism spectrum struggle with certain fonts, too. And therein lies the problem: there’s a great deal of diversity within neurodiversity. What works for me doesn’t work for everyone with dyslexia; not everyone on the autism spectrum gives a flip about fonts, but some really do.

At first my research discouraged and overwhelmed me. The nice thing about guidelines, though, is that they give you a place to start.

Progress

Some people find fonts specifically designed for dyslexia helpful, but there is no one-size-fits-all solution. Personally, I find a font like Open Dyslexic tricky to read; since our goal was to be as inclusive as possible, we ultimately decided that Open Dyslexic wasn’t the right choice for Wellcome. The most practical (and universal) approach would be to build a standards-compliant site that would allow users to override styles with their own preferred fonts and/or colors. And indeed, users should always be able to override styles. But although customization is great if you know what works for you, in my experience (as someone who was diagnosed with dyslexia quite late), I didn’t always know why something was hard, let alone what might help. I wanted to see if there was more we could do for our users.

Mariane Dear, our senior graphic designer, was already negotiating with the type designer (Gareth Hague of Alias) about modifying some aspects of Progress Two. What if we could incorporate some of the BDA’s recommendations? What if we could create something that felt unique and memorable, but was also more dyslexia friendly? That would be cool. So that’s what we set out to do.

Welcome, Wellcome Bold

When I first saw Progress Two, I wasn’t particularly keen on it—but I had to admit it met the confident, energetic aspirations of our rebranding project. And even though I didn’t initially love it, I think our new customized version, Wellcome Bold, has “grown up” without losing its unique personality. I’ve come to love what it has evolved into.

We used the BDA’s checklist as a starting point to analyze and address the legibility of the letterforms and how they might be improved.

Illusion number 1


If uppercase I, lowercase l, and numeral 1 look too similar, some readers might get confused. We found that the capital I and lowercase l of Progress Two weren’t distinct enough, so Hague added a little hook to the bottom of the l.

[Illustration] Capital I, lowercase l, and numeral 1 show how Progress Two metamorphosed into Wellcome Bold. (All glyph illustrations by Eleanor Ratliff.)

Modern modem

In some typefaces, particularly if not set well, r and n can run together to appear to form an m: modern may be read as modem, for example. Breaking the flow between the two shapes differentiates them better.

[Illustration] From Progress Two to Wellcome Bold: lowercase r and n were tweaked to prevent the two glyphs from running together when set next to each other.

Openings

Counters are the openings in the middle of letterforms. Generally speaking, the bigger the counters, the more distinct the letters.

[Illustration] Highlighted counters in Wellcome Bold’s lowercase b, a, e, o, and q.

Mirroring

Because some people with dyslexia perceive letters as flipped or mirrored, the BDA recommends that b and d, and p and q, be easily distinguishable.

[Illustration] Lowercase d and b were modified to make them more easily distinguishable in Wellcome Bold.

Word shapes

Most readers don’t read letter by letter, but by organizing letterforms into familiar word shapes. We modified Progress Two not just to make things easier for readers who are dyslexic; we did it as part of a wider inclusive design process. We wanted to make accessibility a central part of our design principles so that we could create an easier experience for everyone.

Test, test, and test again

In the course of our usability testing, we had the good fortune to be able to work with participants with accessibility needs in each round, including individuals with dyslexia, those on the autism spectrum, and users of screen readers.

Once we started introducing changes, we were anxious to make sure we were heading in the right direction. Nancy Willacy, our lead user experience practitioner, suggested that a good way to uncover any urgent issues would be to ask a large number of respondents to participate in a survey. The media team helped us out by tweeting our survey to a number of charities focused on dyslexia, dyspraxia, autism, and ADHD, and the charities were kind enough to retweet us to their followers.

Although we realize that our test was of the quick-and-dirty variety, we got no feedback indicating any critical issues, which reassured us that we were probably on the right track. Respondents to the survey had a slight preference for the adjusted version of Progress Two over Helvetica (we chose a familiar sans serif as a baseline); the unadjusted version came in last.

Anyone can do it

Even if you don’t have a friendly type designer you can collaborate with to tailor your chosen fonts, you can still do a lot to be typographically accessible.

Type

When selecting a typeface, look for letterforms that are clear and distinct.

  • Look closely and critically. Keeping the checklists we’ve mentioned in mind, watch for details that could potentially trip readers up, like shapes that aren’t well differentiated enough or counters that are too closed.
  • To serif or not to serif? Some research has shown that sans serifs are easier to read on screen, since, especially at lower resolutions, serifs can get muddy, make shapes less distinct, or even disappear altogether. If your existing brand includes a typeface with fine serifs or ornamental details, use it sparingly and make sure you test it with a range of users and devices.
  • Use bold for emphasis. Some research has shown that italics and all-caps text reduce reading speed. Try using bold for emphasis instead.
  • Underline with care. Underlines are great for links, but a standard text-decoration underline obscures descenders. In the future, the text-decoration-skip property may be able to help with that; in the meantime, consider alternatives to the default (see the sketch after this list).
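A rough sketch of what those alternatives might look like, assuming an ordinary link style (the link-alt class name is invented for the example):

<style>
  /* A standard underline can cut through descenders like g, j, p, and y. */
  a { text-decoration: underline; }

  /* The draft property mentioned above asks the browser to skip descenders. */
  a { text-decoration-skip: ink; }

  /* A widely supported alternative: drop the underline, draw a border instead. */
  a.link-alt {
    text-decoration: none;
    border-bottom: 1px solid currentColor;
  }
</style>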

Space

Think carefully about spaces between, around, and within letterforms and clusters of words.

Words

The words you use are just as important as what you do with them.

  • Keep it short. Avoid long sentences. Keep headings clear and concise.
  • Avoid jargon. Write for your audience and cut the jargon unless it’s absolutely necessary. Acronyms and academic terms that might be appropriate for a team of specialists would be totally out of place in a more general article, for example.

So everything’s fixed, right?

Nope.

There is no perfect typeface. Although we worked hard to improve the experience of the Wellcome site, some people will still struggle with our customized headline font, and with the Helvetica, Arial, sans-serif font stack we’re using for body text. However hard we try, some people may need to override defaults and choose the fonts and colors that work best for them. We can respect that by building sites that allow modification without breaking.

Pragmatic perfection

The trouble with expecting perfection in one go is that it can be tempting to take the safe route, to go with the tried and tested. But giving ourselves room to test and refine also gives us the freedom to take risks and try original approaches.

Putting ourselves out there can feel uncomfortable, but Wellcome wants to fund researchers that have the big ideas and the chutzpah to take big risks. So shouldn’t those of us building the site be willing to do the same? Yes, maybe we’ll make mistakes, but we’ll learn from them. If we had chosen a safe typeface for our headline font, we wouldn’t be having these conversations; we wouldn’t have done the research that led us to make changes; we wouldn’t discover new issues that failed to come up in any of our research.

The process sparked much debate at Wellcome, which opened doors to some intriguing opportunities. In the future, I won’t be so reticent about daring to try new things.


Favicon for Joel on Software 03:23 Oh look, a new site! » Post from Joel on Software Visit off-site link

I’ve moved to WordPress. There may be some bugs!

News stories from Monday 05 December, 2016

Favicon for A List Apart: The Full Feed 06:01 This week's sponsor: ENVATO ELEMENTS » Post from A List Apart: The Full Feed Visit off-site link

ENVATO ELEMENTS, the only subscription made with designers in mind. 9000+ quality fonts, graphics, templates and more. Get started today.

News stories from Monday 28 November, 2016

Favicon for A List Apart: The Full Feed 06:01 This week's sponsor: O’REILLY DESIGN CONFERENCE » Post from A List Apart: The Full Feed Visit off-site link

O’REILLY DESIGN CONFERENCE - get the skills and insights you need to design the products of the future. Save 20% with code ALIST

News stories from Tuesday 22 November, 2016

Favicon for A List Apart: The Full Feed 16:00 Insisting on Core Development Principles » Post from A List Apart: The Full Feed Visit off-site link

The web community talks a lot about best practices in design and development: methodologies that are key to reaching and retaining users, considerate design habits, and areas that we as a community should focus on.

But let’s be honest—there are a lot of areas to focus on. We need to put users first, content first, and mobile first. We need to design for accessibility, performance, and empathy. We need to tune and test our work across many devices and browsers. Our content needs to grab user attention, speak inclusively, and employ appropriate keywords for SEO. We should write semantic markup and comment our code for the developers who come after us.

Along with the web landscape, the expectations for our work have matured significantly over the last couple of decades. It’s a lot to keep track of, whether you’ve been working on the web for 20 years or only 20 months.

If those expectations feel daunting to those of us who live and breathe web development every day, imagine how foreign all of these concepts are for the clients who hire us to build a site or an app. They rely on us to be the experts who prioritize these best practices. But time and again, we fail our clients.

I’ve been working closely with development vendor partners and other industry professionals for a number of years. As I speak with development shops and ask about their code standards, workflows, and methods for maintaining consistency and best practices across distributed development teams, I’m continually astonished to hear that often, most of the best practices I listed in the first paragraph are not part of any development project unless the client specifically asks for them.

Think about that.

Development shops are relying on the communications team at a finance agency to know that they should request their code be optimized for performance or accessibility. I’m going to go out on a limb here and say that shouldn’t be the client’s job. We’re the experts; we understand web strategy and best practices—and it’s time we act like it. It’s time for us to stop talking about each of these principles in a blue-sky way and start implementing them as our core practices. Every time. By default.

Whether you work in an internal dev shop or for outside clients, you likely have clients whose focus is on achieving business goals. Clients come to you, the technical expert, to help them achieve their business goals in the best possible way. They may know a bit of web jargon that they can use to get the conversation started, but often they will focus on the superficial elements of the project. Just about every client will worry more about their hero images and color palette than about any other piece of their project. That’s not going to change. That’s okay. It’s okay because they are not the web experts. That’s not their job. That’s your job.

If I want to build a house, I’m going to hire experts to design and build that house. I will have to rely on architects, builders, and contractors to know what material to use for the foundation, where to construct load-bearing walls, and where to put the plumbing and electricity. I don’t know the building codes and requirements to ensure that my house will withstand a storm. I don’t even know what questions I would need to ask to find out. I need to rely on experts to design and build a structure that won’t fall down—and then I’ll spend my time picking out paint colors and finding a rug to tie the room together.

This analogy applies perfectly to web professionals. When our clients hire us, they count on us to architect something stable that meets industry standards and best practices. Our business clients won’t know what questions to ask or how to look into the code to confirm that it adheres to best practices. It’s up to us as web professionals to uphold design and development principles that will have a strong impact on the final product, yet are invisible to our clients. It’s those elements that our clients expect us to prioritize, and they don’t even know it. Just as we rely on architects and builders to construct houses on a solid foundation with a firm structure, so should we design our sites on a solid foundation of code.

If our work doesn’t follow these principles by default, we fail our clients

So what do we prioritize, and how do we get there? If everything is critical, then nothing is. While our clients concentrate on colors and images (and, if we’re lucky, content), we need to concentrate on building a solid foundation that will deliver that content to end users beautifully, reliably, and efficiently. How should we go about developing that solid foundation? Our best bet is to prioritize a foundation of code that will help our message reach the broadest audience, across the majority of use cases. To get to the crux of a user-first development philosophy, we need to find the principles that have the most impact, but aren’t yet implicit in our process.

At a minimum, all code written for general audiences should be:

  • responsive
  • accessible
  • performant

More specifically, it’s not enough to pay lip service to those catch phrases to present yourself as a “serious” dev shop and stop there. Our responsive designs shouldn’t simply adjust the flow and size of elements depending on device width—they also need to consider loading different image sizes and background variants based on device needs. Accessible coding standards should be based on the more recent WCAG 2.0 (Level AA) standards, with the understanding that coding for universal access benefits all users, not just a small percentage (coupled with the understanding that companies whose sites don’t meet those standards are being sued for noncompliance). Performance optimization should consider how image sizes, scripts, and caching can improve page-load speed and decrease the total file size downloaded in every interaction.
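On the image-size point, for instance, plain markup can already let the browser choose an appropriately sized file. A minimal sketch, with hypothetical file names:

<!-- srcset lists candidate files and their intrinsic widths; sizes tells the
     browser how wide the image will render, so it can fetch the smallest fit. -->
<img src="hero-small.jpg"
     srcset="hero-small.jpg 480w, hero-medium.jpg 960w, hero-large.jpg 1920w"
     sizes="100vw"
     alt="Description of the hero image" />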

Do each of these take time? Sure they do. Development teams may even need additional training, and large teams will need to be prescriptive about how that can be integrated into established workflows. But the more these principles are built into the core functions of all of our products, the less time they will take, and the better all of our services will be.

How do we get there?

In the long run, we need to adjust our workflows so that both front-end and backend developers build these best practices into their default coding processes and methodologies. They should be part of our company cultures, our interview screenings, our value statements, our QA testing scripts, and our code validations. Just like no one would think of building a website layout using tables and 1px spacer images anymore (shout out to all the old-school webmasters out there), we should reach a point where it’s laughable to think of designing a fixed-width website, or creating an image upload prompt without an alt text field.

If you’re a freelance developer or a small agency, this change in philosophy or focus should be easier to achieve than if you are part of a larger agency. As with any time you and your team expand and mature your skillsets, you will want to evaluate how many extra hours you need to build into the initial learning curves of new practices. But again, each of these principles becomes faster and easier to achieve once they’re built into the workflow.

There is a wealth of books, blogs, checklists, and how-tos you can turn to for reference on designing responsively, making sites accessible, and tuning for performance. Existing responsive frameworks can act as a starting point for responsive development. After developing the overarching layout and flow, the main speed bumps for responsive content arise in the treatment of tables, images, and multimedia elements. You will need to plan to review and think through how your layouts will be presented at different breakpoints. A tool like embedresponsively.com can speed the process for external content embeds.

Many accessibility gaps can be filled by using semantic markup instead of making every element a div or a span. None of the accessible code requirements should be time hogs once a developer becomes familiar with them. The a11y Project’s Web Accessibility Checklist provides an easy way for front-end developers to review their overall code style and learn how to adjust it to be more accessible by default. In fact, writing truly semantic markup should speed CSS design time when it’s easier to target the elements you’re truly focused on.
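As a quick sketch of the kind of substitution that checklist encourages (markup invented for the example):

<!-- Opaque to assistive technology: -->
<div class="page-title">Quarterly results</div>
<div class="highlights">
  <div class="item">Revenue up 4 percent</div>
</div>

<!-- Semantic by default, and easier to target in CSS: -->
<h2>Quarterly results</h2>
<ul class="highlights">
  <li>Revenue up 4 percent</li>
</ul>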

The more you focus on meeting each of these principles in the early stages of new projects, the faster they will become your default way of developing, and the time spent on them will become a default part of the process.

Maintaining focus

It’s one thing to tell your team that you want all the code they develop to be responsive, accessible, and performant. It’s another thing entirely to make sure it gets there. Whether you’re a solo developer or manage a team of developers, you will need systems in place to maintain focus. Make sure your developers have the knowledge required to implement the code and techniques that address these needs, and supplement with training when they don’t.

Write value statements. Post lists. Ask at every stage what can be added to the process to make sure these core principles are considered. When you hire new talent, you can add questions into the interview process to make sure your new team members are already up to speed and have the same values and commitment to quality from day one.

Include checkpoints within each stage of the design and development process to ensure your work continues to build toward a fully responsive, accessible, and performant end product. For example, you can adjust the design process to start with mobile wireframes to change team mindsets away from designing for desktop and then trying to backfill mobile and tablet layouts. Another checkpoint should be added when determining color palettes to test foreground and background color sets for accessible color contrast. Add in a step to run image files through a compressor before uploading any graphic assets. Ask designers to use webfonts responsibly, not reflexively. Set a performance budget, and build in steps for performance checks along the way. Soon, your team will simply “know” which features or practices tend to be performance hogs and which are lean. You will need to make sure testing and code reviews look for these things, too.

Nothing worth doing happens by accident. Every time we overlook our responsibilities as designers and developers because it’s faster to cut corners, our products suffer and our industry as a whole suffers. As web professionals, how we work and what we prioritize when no one’s looking make a difference in thousands of little ways to thousands of people we will never meet. Remember that. Our clients and our users are counting on us.

 

News stories from Tuesday 15 November, 2016

Favicon for A List Apart: The Full Feed 16:00 The Coming Revolution in Email Design » Post from A List Apart: The Full Feed Visit off-site link

Email, the web’s much maligned little cousin, is in the midst of a revolution—one that will change not only how designers and developers build HTML email campaigns, but also the way in which subscribers interact with those campaigns.

Despite the slowness of email client vendors to update their rendering engines, email designers are developing new ways of bringing commonplace techniques on the web to the inbox. Effects like animation and interactivity are increasingly used by developers to pull off campaigns once thought impossible. And, for anyone coming from the world of the web, there are more tools, templates, and frameworks than ever to make that transition as smooth as possible. For seasoned email developers, these tools can decrease email production times and increase the reliability and efficacy of email campaigns.

Perhaps more importantly, the email industry itself is in a state of reinvention. For the first time, email client vendors—traditionally hesitant to update or change their rendering engines—are listening to the concerns of email professionals. While progress is likely to be slow, there is finally hope for improved support for HTML and CSS in the inbox.

Although some problems still need to be addressed, there has never been a better time to take email seriously. For a channel that nearly every business uses, and that most consumers can’t live without, these changes signal an important shift in a thriving industry—one that designers, developers, and strategists for the web should start paying attention to.

Let’s look at how these changes are manifesting themselves.

The web comes to email

It’s an old saw that email design is stuck in the past. For the longest time, developers have been forced to revisit coding techniques that were dated even back in the early 2000s if they wanted to build an HTML email campaign. Locked into table-based layouts and reliant on inline styles, most developers refused to believe that email could do anything more than look serviceable and deliver some basic content to subscribers.

For a few email developers, though, frustrating constraints became inspiring challenges and the catalyst for a variety of paradigm-shifting techniques.

When I last wrote about email for A List Apart, most people were just discovering responsive email design. Practices that were common on the web—the use of fluid grids, fluid images, and media queries—were still brand new to the world of email marketing. However, the limitations of some email clients forced developers to completely rethink responsive email.

Until recently, Gmail refused to support media queries (and most embedded styles), leaving well-designed, responsive campaigns looking disastrous in mobile Gmail apps. While their recently announced update to support responsive emails is a huge step forward for the community, the pioneering efforts of frustrated email developers shouldn’t go unnoticed.

Building on the work first introduced by MailChimp’s Fabio Carneiro, people like Mike Ragan and Nicole Merlin developed a set of techniques typically called hybrid coding. Instead of relying on media queries to trigger states, hybrid emails are fluid by default, leaving behind fixed pixels for percentage-based tables. These fluid tables are then constrained to appropriate sizes on desktop with the CSS max-width property and conditional ghost tables for Microsoft Outlook, which doesn’t support max-width. Combined with Julie Ng’s responsive-by-default images, hybrid coding is an effective way for email developers to build campaigns that work well across nearly every popular email client.

<img alt="" src="" width="600" border="0" style="display: block; width: 100%; max-width: 100%; min-width: 100px; font-family: sans-serif; color: #000000; font-size: 24px;" />

Responsive-by-default images with HTML attributes and inline CSS.
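The layout side of hybrid coding follows the same pattern. Here is a minimal sketch, assuming a 600px desktop width: a percentage-based table capped with max-width, wrapped in a conditional ghost table so that Outlook, which ignores max-width, still gets a fixed container.

<!--[if mso]>
<table role="presentation" width="600" cellpadding="0" cellspacing="0"><tr><td>
<![endif]-->
<table role="presentation" width="100%" cellpadding="0" cellspacing="0" style="max-width: 600px; margin: 0 auto;">
  <tr>
    <td style="padding: 20px; font-family: sans-serif;">
      Fluid by default; capped at 600px wherever max-width is supported.
    </td>
  </tr>
</table>
<!--[if mso]>
</td></tr></table>
<![endif]-->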

More recently, two other methods have emerged that address the issues with mobile email using more advanced techniques. Both Rémi Parmentier’s Fab Four technique and Stig Morten Myre’s mobile-first approach take the concept of mobile-first development so common on the web and apply it to email. Instead of relying on percentage-based fluid tables, both techniques take advantage of the CSS calc function to determine table and table cell widths, allowing for more adaptable emails across a wide range of clients. And, in both cases, developers can largely drop the use of tables in their markup (save for Microsoft ghost tables), creating emails that hew closer to modern web markup.
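The Fab Four technique takes its name from the four CSS features it leans on: width, min-width, max-width, and calc. A minimal sketch of the core trick, assuming a 480px breakpoint and a two-column layout (the numbers are illustrative):

<style>
  .column {
    display: inline-block; /* the parent may need font-size: 0 to kill gaps */
    /* Evaluates hugely positive below 480px and negative above it. */
    width: calc((480px - 100%) * 480);
    /* Above the breakpoint, the negative width is clamped up to 50%: two columns. */
    min-width: 50%;
    /* Below the breakpoint, the huge width is clamped down to 100%: stacked. */
    max-width: 100%;
  }
</style>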

Moving beyond responsive layouts, email designers are increasingly adding animation and interactivity to their campaigns, creating more engaging experiences for subscribers. Animated GIFs have long been a staple of email design, but CSS animations are becoming more prevalent. Basic transitions and stylistic flourishes like Email Weekly’s heart animation or Nest’s color-shifting backgrounds are relatively easy to implement, fall back gracefully when not supported, and give email designers more options to surprise and delight their audiences.

Nest’s keyframe-animation-driven shifting background colors. Image courtesy of Nest.
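An effect like Nest’s can be approximated in a few lines of CSS. This is a hedged sketch with made-up class names and colors; the static background-color doubles as the fallback for clients that ignore animation:

td.hero {
  background-color: #1a9be0; /* fallback for clients without animation support */
  animation: bg-shift 10s infinite alternate;
}

@keyframes bg-shift {
  from { background-color: #1a9be0; }
  to   { background-color: #7ac143; }
}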

Combined with the checkbox hack and Mark Robbins’s punched card coding, CSS animations allow email developers to create highly interactive experiences for the inbox. While earlier examples of interactivity were reserved for elements like product carousels, people like Robbins and the Rebelmail team have started creating full-blown checkout experiences right in an email.

The different stages of Rebelmail’s interactive checkout email. Image courtesy of Rebelmail.
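At its core, this kind of interactivity rests on the checkbox hack: a hidden form input whose :checked state drives sibling selectors. A minimal sketch follows; the ids and copy are hypothetical, and it only works in clients that support embedded styles and form elements, such as Apple Mail:

<style>
  #toggle, #step-2 { display: none; }           /* hide the input and the second panel */
  #toggle:checked ~ #step-1 { display: none; }
  #toggle:checked ~ #step-2 { display: block; } /* ticking the box swaps the panels */
</style>

<input type="checkbox" id="toggle">
<label for="toggle">See the next step</label>
<div id="step-1">Step one content</div>
<div id="step-2">Step two content</div>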

Interactivity doesn’t have to be reserved for viewing retail products, though. At Litmus, animations and interactivity were used to provide a full product tour inside of an email.

A product tour of Litmus Builder, a code editor built for email design and development, inside an email. Image courtesy of Litmus.

In this case, interactivity was used to provide product education, allowing users to experience a product before they even got their hands on it. While similar educational effects have been achieved in the past with animated GIFs, the addition of truly interactive elements created an experience that elevated it above similar campaigns.

Finally, the web’s focus on accessibility is cropping up in email, too. Both table-based layouts and inconsistencies in support for semantic elements across email clients have contributed to a near-complete lack of accessibility for email campaigns. Advocates are now speaking out and helping to change the way developers build emails with accessibility in mind.

The use of role=presentation on tables in email is becoming more widespread. When a table element includes role=presentation, screen readers recognize that the table is used for layout instead of presenting tabular data, and skip right to the content of the campaign.
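In practice, that means a single attribute on each layout table; a quick sketch:

<table role="presentation" width="100%" cellpadding="0" cellspacing="0" border="0">
  <tr>
    <td>Screen readers skip the table semantics and read this content directly.</td>
  </tr>
</table>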

Developers are also embracing semantic elements like proper headings and paragraphs to provide added value for people with visual impairments. So long as they are careful to override the default margins on semantic, block-level elements, designers can safely use those elements without worrying about broken layouts. It is now something that every email developer should be doing.
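For example, a heading can stay semantic while its client-default margins are zeroed out inline; the specific sizes here are illustrative:

<h1 style="margin: 0; font-size: 28px; line-height: 36px;">A real heading, not a styled table cell</h1>
<p style="margin: 0 0 16px 0;">Paragraph copy that screen readers can navigate by element.</p>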

Combined with email’s focus on alternative text—widely used to combat email clients that disable images for security reasons—accessible tables and semantic elements are laying the foundation for more usable, accessible email campaigns. There’s still a huge amount of research and education needed around accessibility in email, but the email world is slowly catching up to that of the web.

All of these techniques, mostly commonplace on the web, are relatively new to the world of HTML email. Some have been used on a limited scale, but they are on the verge of becoming mainstream. And, while animation and interactivity aren’t appropriate for every email campaign, they are valuable additions to the email toolbox. Taken together, they represent a massive shift in how developers and marketers approach HTML email and are changing the way subscribers think about the humble inbox.

Better tooling

If anything can be considered a constant on the web, it’s that web designers and developers love building tools and frameworks to (in theory) improve their workflows and the reliability of their code. Just like accessibility and interactivity, this focus on tooling and frameworks has been making its way into the email industry over the past few years.

Instead of relying on individual, locally saved, static HTML files, many email developers are now embracing not only GitHub to host and share code, but complete build systems to compile that code, as well. Tools like Grunt and Gulp are now in wider use, as are static site generators like Middleman.

Being able to focus on discrete components means developers no longer have to update multiple HTML files when managing large email programs. For teams in charge of dozens, if not hundreds, of different email templates, this is a godsend. Updating a logo in one place and having it propagate across different campaigns, for example, saves tons of time.

The use of build tools also opens up the possibility of hyperpersonalized campaigns: emails with custom content and custom layouts on a per-subscriber basis. Allowing build systems to handle the compilation of individual modules means that those modules can be pieced together in a virtually unlimited number of ways based on conditions set at the beginning of the build process. This moves personalization in email beyond basic name substitutions and gives email marketers an unbelievably powerful way to connect with subscribers and provide way more value than your typical “batch and blast” campaign.

Likewise, more email developers are relying on preprocessors like Sass and Less to speed up the development workflow. Controlling styles through variables, mixins, and logic can be extremely powerful. While CSS post processors aren’t in wide use, a few savvy email developers are now starting to take advantage of upcoming CSS features in their campaigns.
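As a small, hedged illustration of what preprocessing buys you, variables and a mixin can centralize brand values that would otherwise be repeated across dozens of inline styles. The file name, variable names, and values below are assumptions:

// _email.scss: compiled to plain CSS, then inlined by the build step
$brand-color: #1a9be0;
$base-padding: 20px;

@mixin cta($bg: $brand-color) {
  background-color: $bg;
  padding: $base-padding;
  color: #ffffff;
}

.button { @include cta; }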

Email developers and designers working with smaller teams, or those less familiar with advanced tools like preprocessors and build tools, now have a plethora of HTML email templates and frameworks at their disposal. They range in complexity from simple, static HTML files that make customization easy to completely abstracted coding frameworks like MJML and Zurb’s Foundation for Emails 2. Both MJML and Foundation for Emails 2 introduce their own templating languages, allowing email developers to use markup closer to that found on the web, which is then compiled into more complex, table-based HTML.

<mjml>
  <mj-body>
    <mj-container>
      <mj-section>
        <mj-column>
          <mj-text>Hello World!</mj-text>
        </mj-column>
      </mj-section>
    </mj-container>
  </mj-body>
</mjml>

An example of MJML’s templating language, which compiles to table-based markup.

One area that still needs improvement is testing. While tools like Litmus have vastly improved the experience of testing static emails across clients, interactive emails present new challenges. Since testing services currently return static screenshots taken from the inbox, access to devices is crucial for teams working on interactive campaigns. Although a few people are coming up with novel approaches to testing interactive emails (most notably Cyrill Gross’s use of WebKit browsers and clever JavaScript), tooling around interactive email testing will need to improve for more email developers to adopt some of the techniques I describe here.

A seat at the table

Two of the most exciting developments in the email world are the recent Microsoft and Litmus partnership and Gmail’s announcement of support for media queries.

Due to their typically abysmal support for HTML and CSS (especially the box model and floats), the many variations of Outlook have long been the biggest thorn in email developers’ sides. Outlook is the primary reason that emails use tables for layout.

Now, though, for the first time, Microsoft is reaching out to the email community to document bugs and rendering problems in order to guide future development efforts and improve the rendering engines underpinning their email clients. While we’ll still have to rely on tables for the foreseeable future, this is a good indicator that the email community is moving closer to some form of standards, similar to the web in the early 2000s. I don’t think we’ll ever see standards as widely propagated across email clients as they are on the web, but this is the first step toward better HTML and CSS support for email developers.

One likely result of the Microsoft/Litmus partnership is that more email client vendors will open up lines of communication with the email design industry. With any luck, and a lot of work, Microsoft will be the first of many vendors to sit down at the table with email designers, developers, and marketers in order to improve things not only for email professionals, but also for the subscribers we serve. There are already signs that things are getting better beyond Microsoft’s promise to improve.

Gmail, one of the more problematic email clients, recently updated their rendering engine to support display: none;—an unprecedented move from a team that is historically unsympathetic to the concerns of the email community. Email developers were in for an even bigger surprise from the Gmail team when they announced that they will be supporting media queries and embedded styles, too. While the hybrid coding approach mentioned earlier is still useful for addressing some email clients, this change means that it is now easier than ever to apply the principles of responsive web design—fluid grids, fluid images, and media queries—to HTML email campaigns.

Perhaps more interesting is Gmail’s added support for embedded CSS and element, class, and ID selectors. With this one change, embedded styles will be nearly universally supported—meaning that email designers will no longer be bound to inline styles and all the headaches they bring. Emails will now be easier to design, develop, and maintain. The lighter code base and more familiar style of writing CSS means that many of the blockers for web developers taking email seriously will be removed.
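For web developers, the result looks reassuringly familiar. Here is a hedged sketch of an embedded stylesheet with class selectors and a media query, the kind of code Gmail’s update makes broadly viable; the class name and the 480px breakpoint are example assumptions:

<style>
  .column { display: inline-block; width: 50%; }
  @media screen and (max-width: 480px) {
    .column { display: block !important; width: 100% !important; }
  }
</style>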

Beyond rallying around improved support for HTML and CSS, the email community itself is thriving. I remember the dark days—really only a few years ago—of email design, when it was extraordinarily difficult to find reliable information about how to build email campaigns, let alone connect with others doing the same. Today, people interested in email have a large and growing community to turn to for help. More marketers, designers, and developers are sharing their work and opinions, contributing to a discourse that is helping to shape the industry in new and interesting ways.

Perhaps more importantly, designers and developers are beginning to understand that working with email is a viable career option. Instead of relegating email to one more task as a web dev, many are now taking up the mantle of the full-time email developer.

Now’s the time

Where once there was just darkness and Dreamweaver, the email world is brightening with the light of a growing community, better tools, and amazing techniques to animate a traditionally static medium. And, with the increasing support of email client vendors, we can finally see the flicker of email standards way off on the horizon.

While some folks have expressed emotions ranging from amusement to scorn when discussing email marketing, no one can take it for granted anymore. Subscribers love email, even if you don’t. Email is routinely the most effective digital marketing channel. Companies and teams need to embrace that fact and take email seriously. Fortunately, now’s the perfect time to do that. Never have there been more tools, resources, and people dedicated to making email better.

The revolution in email is bound to be a slow one, but make no mistake: it’s coming. The web is leaking into the inbox. If you can’t keep up, your campaigns (and you) will be left behind.

News stories from Monday 14 November, 2016

Favicon for A List Apart: The Full Feed 06:01 This week's sponsor: ADOBE XD » Post from A List Apart: The Full Feed Visit off-site link

ADOBE XD BETA, the only all-in-one solution for designing, prototyping, and sharing experiences for websites and mobile apps.

News stories from Monday 07 November, 2016

Favicon for Kopozky 14:51 Doing It Right » Post from Kopozky Visit off-site link

News stories from Tuesday 01 November, 2016

Favicon for A List Apart: The Full Feed 15:00 Let Emotion Be Your Guide » Post from A List Apart: The Full Feed Visit off-site link

We were sitting in a market research room in the midst of a long day of customer interviews. Across from us, a young mother was telling us about her experience bringing her daughter into the ER during a severe asthma attack. We had been interviewing people about their healthcare journeys for a large hospital group, but we’d been running into a few problems.

First, the end-goal of the interviews was to develop a strategy for the hospital group’s website. But what we’d discovered, within the first morning of interviews aimed at creating a customer journey map, was that hospital websites were part of no one’s journey. This wasn’t wildly surprising to us—in fact it was part of the reason we’d recommended doing customer journey mapping in the first place. The hospital had a lot of disease content on their site, and we wanted to see whether people ever thought to access that content in the course of researching a condition. The answer had been a resounding no. Some people said things like, “Hmm, I’d never think to go to a hospital website. That’s an interesting idea.” Others didn’t even know that hospitals had websites. And even though we’d anticipated this response, the overwhelming consistency on this point was starting to freak out our client a little—in particular it started to freak out the person whose job it was to redesign the site.

The second issue was that our interviews were falling a little flat. People were answering our questions but there was no passion behind their responses, which made it difficult to determine where their interactions with the hospital fell short of expectations. Some of this was to be expected. Not every customer journey is a thrill ride, after all. Some people’s stories were about mundane conditions. I had this weird thing on my hand, and my wife was bugging me to get it checked out, so I did. The doctor gave me cream, and it went away, was one story. Another was from someone with strep throat. We didn’t expect much excitement from a story about strep throat, and we didn’t get it. But mixed in with the mundane experiences were people who had chronic conditions, or were caregivers for children, spouses, or parents with debilitating diseases, or people who had been diagnosed with cancer. And these people had been fairly flat as well.

We were struggling with two problems that we needed to solve simultaneously. First: what to do with the three remaining days of interviews we had lined up, when we’d already discovered on the morning of day one that no one went to hospital websites. And second: how to get information that our client could really use. We thought that if we could just dig a little deeper underneath people’s individual stories, we could produce something truly meaningful for not only our client, but the people sitting with us in the interview rooms.

We’d been following the standard protocol for journey mapping: prompting users to tell us about a specific healthcare experience they’d had recently, and then asking them at each step what they did, how they were feeling and what they were thinking. But the young mother telling us about her daughter’s chronic asthma made us change our approach.

“How were you feeling when you got to the ER?” we asked.

“I was terrified,” she said. “I thought my daughter was going to die.” And then, she began to cry. As user experience professionals we’re constantly reminding ourselves that we are not our users; but we are both parents and in that moment, we knew exactly what the woman in front of us meant. The entire chemistry of the room shifted. The interview subject in front of us was no longer an interview subject. She was a mother telling us about the worst day of her entire life. We all grabbed for the tissue box, and the three of us dabbed at our eyes together.

And from that point on, she didn’t just tell us her story as though we were three people sitting in front of a two-way mirror.  She told us her story the way she might tell her best friend.

We realized, in that interview, that this was not just another project. We’ve both had long careers in user research and user experience, but we’d never worked on a project that involved the worst day of people’s lives. There might be emotion involved in using a frustrating tool at work or shopping for the perfect gift, but nothing compares to the day you find yourself rushing to the emergency room with your child.

So we decided to throw out the focus on the hospital website, concentrate on where emotion was taking us, and trust that we would be able to reconcile our findings with our client’s needs. We, as human beings, wanted to hear other human beings tell us about the difficulties of caring for a mother with Alzheimer’s disease. We wanted to know what it felt like to receive a cancer diagnosis after a long journey to many doctors across a spectrum of specialties. We wanted to understand what we could do, in any small way, to help make these Worst Days minutely less horrible, less terrifying, and less out-of-control. We knew that the client was behind the two-way mirror, concerned about the website navigation, but we also knew that we were going to get to someplace much more important and meaningful by following wherever these stories took us.

We also realized that not all customer journeys are equal. We still wanted to understand what people’s journeys with strep throat and weird hand rashes looked like, because those were important too. Those journeys told us about the routine issues that we all experience whenever we come into contact with the medical establishment—the frustration of waiting endlessly at urgent care, the annoyance of finding someone who can see you at a time when you can take off from work, the importance of a doctor who listens. But we also wanted to get to the impassioned stories where the stakes and emotions were much higher, so we adjusted our questioning style accordingly. We stuck to our standard protocol for the routine medical stories. And for the high-stakes journeys, the ones that could leave us near tears or taking deep breaths at the end of the interview, we proceeded more slowly. We gave our interview subjects room to pause, sigh, and cry. We let there be silence in the room. We tried to make it not feel weird for people to share their most painful moments with two strangers.

When we completed our interviews at the end of the week, we had an incredibly rich set of stories to draw from—so many, in fact, that we were able to craft a digital strategy that went far beyond what the hospital website would do. (Website? We kept saying to ourselves. Who cares about the website?) We realized that in many ways, we were limiting ourselves by thinking about a website strategy, or even a digital strategy. By connecting with the emotional content of the conversations, we started to think about a customer strategy—one that would be medium-agnostic.

In Designing for Emotion, Aarron Walter encourages us to “think of our designs not as a façade for interaction, but as people with whom our audience can have an inspired conversation.” As we moved into making strategic recommendations, we thought a lot about how the hospital (like most hospitals) interacted with their patients as a bureaucratic, depersonalized entity. It was as though patients were spilling over with a hundred different needs, and the hospital group was simply silent. We also thought about what a helpful human would do at various stages of the journey, and found that there were multiple points where pushing information out to customers could make a world of difference.

We heard from people diagnosed with cancer who said, “After I heard the word ‘cancer’ I didn’t hear anything else, so then I went home and Googled it and completely panicked.” So we recommended that the day after someone gets a devastating diagnosis like that, there is a follow-up email with more information, reliable information resources, and videos of other people who experienced the same thing and what it was like for them.

We heard from people who spent the entire day waiting for their loved ones to get out of surgery, not knowing how much longer it would take, and worried that if they stepped out for a coffee, they would miss the crucial announcement over the loudspeaker. As a result, we proposed that relatives receive text message updates such as, “Your daughter is just starting her surgery. We expect that it will take about an hour and a half. We will text you again when she is moved to the recovery room.”

The stories were so strong that we believed they would help our client refocus their attention away from the website and toward the million other touchpoints and opportunities we saw to help make the worst day of people’s lives a little less horrible.

And for the most part, that is what happened. We picked a few journeys that we thought provided a window on the range of stories we’d been hearing. As we talked through some of the more heart-rending journeys there were audible gasps in the room: the story of a doctor who had refused to see a patient after she’d brought in her own research on her daughter’s condition; a woman with a worsening disease who had visited multiple doctors to try to get a diagnosis; a man who was caring for his mother-in-law, who was so debilitated by Alzheimer’s that she routinely tried to climb out the second floor bedroom window.

In Design for Real Life, Sarah Wachter-Boettcher and Eric Meyer note that “the more users have opened up to you in the research phase” the more realistic your personas can be. More realistic personas, in turn, make it easier to imagine crisis points. And this was exactly what began to unfold as we shared our user journeys. As we told these stories, we felt a shift in the room. The clients started to share their own unforgettable healthcare experiences. One woman pulled out her phone and showed us pictures of her tiny premature infant, wearing her husband’s wedding ring around her wrist as she lay there in an incubator, surrounded by tubes and wires. When we took a break we overheard a number of people on the client side talking over the details of these stories and coming up with ideas for how they could help that went so beyond the hospital website it was hard to believe that had been our starting point. One person pointed out that a number of journeys started in Urgent Care and suggested that perhaps the company should look at expanding into urgent care facilities.

In the end, the research changed the company’s approach to the site.

“We talked about the stories throughout the course of the project,” one of our client contacts told me. “There was so much raw humanity to them.” A year after the project wrapped up (even though due to organizational changes at the hospital group our strategy recommendations have yet to be implemented), our client quickly rattled off the names of a few of our customer types. It is worth noting, too, that while our recommendations went much farther than the original scope of the project, the client appreciated being able to make informed strategic decisions about the path forward. Their immediate need was a revamped website, but once they understood that this need paled in comparison to all of the other places they could have an impact on their customers’ lives, they began talking excitedly about how to make this vision a reality down the road.

For us, this project changed the way we conceptualize projects, and illustrated that the framework of a website strategy or even “digital” strategy isn’t always meaningful. Because as the digital world increasingly melds with the IRL world, as customers spend their days shifting between websites, apps, texting, and face-to-face interactions, it becomes increasingly important for designers and researchers to drop the distinctions we’ve drawn around where an interaction happens, or where emotion spikes.

Before jumping in, however, it is important to prep the team about how, and most importantly why, your interview questions probe into how customers are feeling. When you get into the interview room, coaxing out these emotional stories requires establishing emotional rapport quickly and making it a safe place for participants to express themselves.

Being able to establish this rapport has changed our approach to other projects as well—we’ve seen that emotion can play into customer journeys in the unlikeliest of places. On a recent project for a client who sells enterprise software, we interviewed a customer who had recently gone through a system upgrade experience which affected tens of thousands of users. It did not go well and he was shaken by the experience. “The pressure on our team was incredible. I am never doing that ever again,” he said. Even for this highly technical product, fear, frustration, anger, and trust were significant elements of the customer journey. This is a journey where a customer has ten thousand people angry at him if the product he bought does not perform well, and he could even be out of a job if it gets bad enough. So while the enterprise software industry doesn’t exactly scream “worst day of my life” in the same way that hospitals do, emotion can run high there as well.

We sometimes forget that customers are human beings and human beings are driven by emotion, especially during critical life events. Prior to walking into the interview room we’d thought we might unearth some hidden problems around parking at the ER, navigating the hospital, and, of course, issues with the website content. But those issues were so eclipsed by all of the emotions surrounding a hospital visit that they came to seem irrelevant. Not being able to find parking at the ER is annoying, but more important was not knowing what you were supposed to do next because you’d just been told you have cancer, or because you feared for your child’s life. By digging deeper into this core insight, we were able to provide recommendations that went beyond websites, and instead took the entire human experience into account.

For researchers and designers tasked with improving experiences, it is essential to have an understanding of the customer journey in its full, messy, emotional agglomeration. Regardless of the touchpoint your customer is interacting with, the emotional ride is often what ties it all together, particularly in high-stakes subject matter. Are your client’s customers likely to be frustrated, or are they likely to be having the worst day of their lives? In the latter types of situations, recognize that you will get much more impactful insights when you address the emotions head-on.

And when appropriate, don’t be afraid to cry.

Favicon for A List Apart: The Full Feed 15:00 Awaken the Champion A/B Tester Within » Post from A List Apart: The Full Feed Visit off-site link

Athletes in every sport monitor and capture data to help them win. They use cameras, sensors, and wearables to optimize their caloric intake, training regimens, and athletic performance, using data and exploratory thinking to refine every advantage possible. It may not be an Olympic event (yet!), but A/B testing can be dominated the same way.

I talked to a website owner recently who loves the “always be testing” philosophy. He explained that he instructs his teams to always test something—the message, the design, the layout, the offer, the CTA.

I asked, “But how do they know what to pick?” He thought about it and responded, “They don’t.”

Relying on intuition, experienced as your team may be, will only get you so far. To “always test something” can be a great philosophy, but testing for the sake of testing is often a massive waste of resources—as is A/B testing without significant thought and preparation. 

Where standard A/B testing can answer questions like “Which version converts better?” A/B testing combined with advanced analyses gives you something more important—a framework to answer questions like “Why did the winning version convert better?”

Changing athletes, or a waste of resources?

Typical A/B testing is based on algorithms that are powered by data during the test, but we started trying a different model on our projects here at Clicktale, putting heavy emphasis on data before, during, and after the test. The results have been more interesting and strategic, not just tactical.

Let’s imagine that Wheaties.org wants to reduce bounce rate and increase Buy Now clicks. Time for an A/B test, right?

The site’s UX lead gets an idea to split test their current site, comparing versions with current athletes to versions featuring former Olympians.

Wheaties page design.

But what if your team monitored in-page visitor behavior and saw that an overwhelming majority of site visitors do not scroll below the fold to even notice the athletes featured there?

Now the idea of testing the different athlete variants sounds like a waste of time and resources, right?

But something happens when you take a different vantage point. What if your team watched session replays and noticed that those who do visit the athlete profiles tend to stay on the site longer and increase the rate of “Buy Now” clicks exponentially? That may be a subset of site visitors, but it’s a subset that’s working how you want.

If the desired outcome is to leverage the great experiences built into the pages, perhaps it would be wise to bring the athlete profiles higher. Or to A/B test elements that should encourage users to scroll down.

In our experience, both with A/B testing our own web properties and in aggregating the data of the 100 billion in-screen behaviors we’ve tracked, we know this to be true: testing should be powerful, focused, and actionable. In making business decisions, it helps when you’re able to see visual and conclusive evidence.

Imagine a marathon runner who doesn’t pay attention to other competitors once the race begins. Now, think about one who paces herself, watches the other racers, and modifies her cadence accordingly.

By doing something similar, your team can be agile in making changes and fixing bugs. Each time your team makes an adjustment, you can start another A/B test ... which lets you improve the customer experience faster than if you had to wait days for the first A/B test to be completed.

The race is on

Once an A/B test is underway, the machines use data-based algorithms to determine a winner. Based on traffic, conversion rate, number of variations, and the minimum improvement you want to detect, the finish line may be days or weeks away. What is an ambitious A/B tester to do?

Watch session replay of each variation immediately, once you’ve received a representative number of visitors. Use them to validate funnels and quickly be alert to any customer experience issues that may cause your funnels to leak.

Focus on the experience. Understanding which user behavior dominates each page is powerful; internalizing why users are behaving as they are enables you to take corrective actions mid-course and position yourself properly.

The next test

In your post-test assessments, again use data to understand why the winning variation succeeded with its target audience. Understanding the reason can help you prioritize future elements to test.

For example, when testing a control with a promotional banner (that should increase conversions) against a variation without a promotion, a PM may conclude that the promotion is ineffective when that variation loses.

Studying a heatmap of the test can reveal new insights. In this example, conversions were reduced because the banner pushed the “buy now” CTA out of sight.

Example of A/B testing on mobile devices.

In this case, as a next test, the PM may decide not to remove the banner, but rather to test it in a way that keeps the more important “buy now” CTA visible. There is a good chance such a combination will yield even better results.

There are plenty of other examples of this, too. For instance, the web insights manager at a credit card company told me that having the aggregate data, in the form of heatmaps, helps him continually make more informed decisions about this A/B testing. In their case, they were able to rely on data that indicated they could remove a content panel without hurting their KPIs.

Another one of our customers, GoDaddy, was able to increase conversions on its checkout page by four percent after running an A/B test. “With our volume, that was a huge, huge increase…. We also tripled our donations to Round Up for Charity,” said Ana Grace, GoDaddy’s director of ecommerce, global product management. But the optimization doesn’t stop once a test is complete; GoDaddy continues to monitor new pages after changes, and sometimes finds additional hypotheses that require testing.

What it takes to go for the gold

I was not blessed with the natural athletic ability of an Olympian, but when it comes to A/B testing web assets and mobile apps, I have what I need to determine which version will be the winner. The powerful combination of behavioral analytics and big data gives athletes the knowledge they need to make the most of their assets, and it can do the same for you.

News stories from Tuesday 25 October, 2016

Favicon for A List Apart: The Full Feed 15:00 Network Access: Finding and Working with Creative Communities » Post from A List Apart: The Full Feed Visit off-site link

A curious complaint seems to ripple across the internet every so often: people state that “design” is stale. The criticism is that no original ideas are being generated; anything new is quickly co-opted and copied en-masse, leading to even more sterility, conceptually. And that leads to lots of articles lamenting the state of the communities they work in.

What people see is an endless recycling within their group, with very little bleed-over into other disciplines or networks. Too often, we speak about our design communities and networks as resources to be used, not as groups of people.

Anthony McCann describes the two main ways we view creative networks and the digital commons:

We have these two ways of speaking: commons as a pool of resources to be managed, and commons as an alternative to treating the world as made up of resources.

One view is that communities are essentially pools of user-generated content. That freely available content is there to be mined—the best ideas extracted and repackaged for profit or future projects. This is idea as commodity, and it very conveniently strips out the people doing the creating, instead looking at their conceptual and design work as a resource.

Another way is to view creative networks as interdependent networks of people. By nature, they cannot be resources, and any work put into the community is to sustain and nourish those human connections, not create assets. The focus is on contributing.

A wider view

By looking at your design communities as resources to be mined, you limit yourself to preset, habitual methods of sharing and usage. The more that network content is packaged for sale and distribution, the less “fresh” it will be. In Dougald Hine’s essay Friendship is a Commons, he says that when we talk enthusiastically about the digital commons these days, we too often use the language of resource management, not the language of social relations.

Perhaps we should take a wider, more global view.

There are numerous digital design communities across the world; they are fluid and fresh, and operate according to distinct and complex social rules and mores. These designers are actively addressing problems in their own communities in original ways, and the result is unique, culturally relevant work. By joining and interacting with them—by accessing these networks—we can rethink what the design community is today.

Exploring larger communities

There are a number of creative communities I’ve become a part of, to varying degrees of attention. I’ve been a member of Behance for almost 10 years (Fig. 1), back when it was something very different (“We are pleased to invite you to join the Behance Network, in partnership with MTV”).

Fig. 1: Screenshot of the Behance creative community website in 2009. Source: belladonna

While I lived in Japan, Behance was a way for me to learn new digital design techniques and participate in a Western-focused, largely English speaking design network. As time has gone on, it’s strange that I now use it almost exclusively to see what is happening outside the West.

Instagram, Twitter, and Ello are three mobile platforms with a number of features that are great for collecting visual ideas without the necessity of always participating. The algorithms are focused on showing more of what I have seen—the more often I view work from Asian and African designers and illustrators, the more often I discover new work from those communities. While interesting for me, it does create filter bubbles, and I need to be careful of falling into the trap of seeing more of the same.

There is, of course, a counter-reaction to the public, extractive nature of these platforms—the rise of “Slack as community.” The joke about belonging to 5-10 different Slack groups is getting old, but illustrates a trend in the industry during the past year or so. I see this especially with designers of color, where the firehoses of racist/sexist abuse on open digital networks means that creativity is shelved in favor of simple preservation. Instead, we move, quietly and deliberately, to Slack, a platform that is explicit in its embrace of a diverse usership, where the access is much more tightly controlled, and where the empathy in design/dev networks is more readily shared and nurtured.

Right now, these are the creative platforms where I contribute my visual thinking, work, and conversations toward addressing messy visual questions—interactive ideas that assume a radically different way of viewing the world. There are, of course, others.

Exploring visual design alternatives

In Volume II of Mawahib (a series of books that showcase Arab illustrators, photographers, and graphic designers), we see one of these design communities compiled and printed, an offline record of a thriving visual network (Fig. 2).

Fig. 2: Page spreads from the Mawahib book, showcasing Arab illustration and design work

And perhaps it is in the banding together that real creative change can happen. I was fascinated to read this article about an illustration collective in Singapore. At 7 years old, it’s reportedly the longest running drawing event in Singapore. Michael Ng says, “Many people don’t know illustrators like us exist in Singapore and they’re amazed. Companies have also come up to hire us for work because of the event. We also network amongst ourselves, passing on opportunities and collaborations.” Comments like this show that there are thriving visual design scenes worldwide, ones that collaborate internally, and work for exposure and monetary gain externally.

Fig. 3: Poster from the Organisation of Illustrators Council in Singapore, advertising one of their collaborative sketching nights

UX research that builds community

Earlier in this article, we started by looking at the different ways people view existing creative communities. But what about people who create new ones? Here, once again, we have designers and strategists who use careful cultural research to create and develop sustainable digital networks, not simply resource libraries.

First, let’s look at the pilot of My Voice, an open source medical tool developed at Reboot. The residents of Wamba, a rural area in Nasarawa State, Nigeria, struggled to find a way to communicate with their healthcare providers. Reboot saw an opportunity to develop an empowering, responsive platform for the community, a way for people to share feedback with clinics and doctors in the area.

After a nine-week trial of the platform and software, the residents of Wamba saw the clinics begin making small changes to how they communicated—things like better payment info and hours of operation. The health department officials in the area also saw a chance to better monitor their clinics and appear more responsive to their constituents. What began as a way to report on clinic status and quality became a way for the community and local government to improve together.

Fig. 4: Interviews with community residents for the MyVoice medical app

In another project, a group of researchers worked with a community in South Africa’s Eastern Cape to design and test mobile digital storytelling. Their experience creating a storytelling platform that did not follow European narrative tradition is enlightening, and hits on a key framing in line with how the people in Ndungunyeni view creative networks (Fig. 4).

Contrary to their initial ideas, the UX researchers found that storytelling “...as an individual activity is discordant with villagers’ proximity, shared use of phones and communication norms. They devote significant time exchanging views in meetings and these protocols of speaking and listening contribute to cohesion, shared identity and security.”

Fig. 5: Mobile digital storytelling prototype (left) and story recording UI (right)

In both of these examples, we see new creative networks relying on linked social systems and cues in order to thrive. Most importantly, they rely on reciprocation—the trade of ideas, whether there is immediate personal benefit or not. Each of the participants—the community members, the UX designers, the clinics, and the local government—was able to collaborate on a common goal. Simply-crafted technology and UX made this possible, even in rural areas with little cellular connectivity. They all contributed, not looking to extract value, but to add it; they used these networking tools to deepen their interactions with others.

Building alternatives to current networks

Almost every project we work on as designers would certainly benefit from alternative viewpoints. That can be hard to set up, however, and collaborating with designers and developers outside your immediate circle may seem scary at first. Keep in mind that the goal is to add value to others’ networks and build interpersonal connections. This is the only way that we keep the creative ideas fresh.

Starting with freelance and project work

Sometimes the simplest way to access different creative circles is to pay for project work. A great example is Karabo Moletsane’s project for Quartz Africa. An accomplished illustrator from South Africa, Moletsane recently did a set of 32 wonderful portraits for the Quartz Africa Innovators 2016 Series (Fig. 6). When I asked Moletsane about how she got the illustration job, she said it came via her work on AfricanDigitalArt.com. Moletsane also said she regularly posts work on her Instagram and Behance, making Quartz’s choice to work with this talented South African for a global series on African innovators a no-brainer.

Fig. 6: Karabo Moletsane’s full series of 32 African Innovators, for Quartz Magazine

Hiring and team-building from different networks

Sometimes, shorter freelance projects won’t give you long-term quality access to new design communities and ideas. Sometimes you need to bring people onto your team, full-time. Again, I point out what Dougald Hine says regarding the ways digital communities can work:

...people have had powerful experiences of what it means to come together, work and build communities [but] the new forms of collaboration easily turn into new forms of exploitation…

Instead of looking for short-term access, hiring and developing team members from other networks can be a powerful alternative. Tyler Kessler, the CEO of Lumogram in St. Louis, recently wrote about hiring a new head of development based in Nigeria, and what it has meant to his company. He used Andela, a startup that is training and hiring out a new generation of developers from Nigeria.

Collaboration around specific ideas

Your contributions to networks also need not be permanent or rigid. There are numerous opportunities to join collectives, or working groups, that build more ephemeral networks around specific issues. One such project, by the DESIS Cluster Collective (pdf), was set up “to investigate which new services, tools, and solutions we can design together with the elderly, when thinking about our future society.” The breadth of ideas is astounding, from systems for healthier eating, to mini-parks within urban areas for seniors to hang out in. Each team involved contributed deep user research, information design, and cultural cues to propose new ways for our elderly to coexist (Fig. 7).

Fig. 7: Cultural interface research with the elderly, conducted by the Royal College of Art, England in 2013

The form and utility of design communities in the 21st century is fluid, and goes from groups of like-minded designers and illustrators to communities working digitally to solve specific problems. Even short-term collectives are addressing social issues.

All are intricate groups of creative humans. They shouldn’t be viewed, in any way, as “resources” for extraction and inspiration. Too often in the Western design world, we hear that ideas have largely plateaued and become homogenous, but that ignores the amazing work flourishing in other nations and pockets of the internet. How you build connections among other creative people makes you part of the network. See them, however ephemeral and globally distributed, as a powerful way to expand your design horizons and be part of something different.


Favicon for A List Apart: The Full Feed 15:00 Liminal Thinking » Post from A List Apart: The Full Feed Visit off-site link

A note from the editors: We’re pleased to share an excerpt from Practice 4 of Dave Gray's new book, Liminal Thinking, available now from Two Waves Books. Use code ALA-LT for 20% off!

A theory that explains everything, explains nothing
Karl Popper

Here’s a story I heard from a friend of mine named Adrian Howard. His team was working on a software project, and they were working so hard that they were burning themselves out. They were working late nights, and they agreed as a team to slow down their pace. “We’re going to work 9 to 5, and we’re going to get as much done as we can, but we’re not going to stay late. We’re not going to work late at night. We’re going to pace ourselves. Slow and steady wins the race.”

Well, there was one guy on the team who just didn’t do that. He was staying late at night, and Adrian was getting quite frustrated by that. Adrian had a theory about what was going on. What seemed obvious to him was that this guy was being macho, trying to prove himself, trying to outdo all the other coders, and showing them that he was a tough guy. Everything that Adrian could observe about this guy confirmed that belief.

Late one night, Adrian was so frustrated that he went over and confronted the guy about the issue. He expected a confrontation, but to his surprise, the guy broke down in tears. Adrian discovered that this guy was not working late because he was trying to prove something, but because home wasn’t a safe place for him. They were able to achieve a breakthrough, but it was only possible because Adrian went up and talked to him. Without that conversation, there wouldn’t have been a breakthrough.

It’s easy to make up theories about why people do what they do, but those theories are often wrong, even when they can consistently and reliably predict what someone will do.

For example, think about your horoscope. Horoscopes make predictions all the time:

  • “Prepare yourself for a learning experience about leaping to conclusions.”
  • “You may find the atmosphere today a bit oppressive.”
  • “Today, what seems like an innocent conversation will hold an entirely different connotation for one of the other people involved.”
  • “Stand up to the people who usually intimidate you. Today, they will be no match for you.”

These predictions are so vague that you can read anything you want into them. They are practically self-fulfilling prophecies: if you believe them, they are almost guaranteed to come true, because you will set your expectations and act in ways that make them come true. And in any case, they can never be disproven.

So what makes a good theory, anyway?

A scientist and philosopher named Karl Popper spent a lot of time thinking about this. Here’s the test he came up with, and I think it’s a good one: Does the theory make a prediction that might not come true? That is, can it be proven false?

What makes this a good test? Popper noted that it’s relatively easy to develop a theory that offers predictions—like a horoscope—that can never be disproven.

The test of a good theory, he said, is not that it can’t be disproven, but that it can be disproven.

For example, if I have a theory that you are now surrounded by invisible, undetectable, flying elephants, well, there’s no way you can prove me wrong. But if my theory can be subjected to some kind of test—if it is possible that it could be disproved, then the theory can be tested.

He called this trait falsifiability: the possibility that a theory could be proven false.

Many theories people have about other people are like horoscopes. They are not falsifiable theories, but self-fulfilling prophecies that can never be disproven.

Just because you can predict someone’s behavior does not validate your theories about them, any more than a horoscope prediction “coming true” means it was a valid prediction. If you want to understand what’s going on inside someone else’s head, sometimes you need to have a conversation with them.

Many years after the Vietnam War, former U.S. Secretary of State Robert McNamara met with Nguyen Co Thach, former Foreign Minister of Vietnam, who had fought for the Viet Cong in the war. McNamara had formed the hypothesis that the war could have been avoided, that Vietnam and the United States could have both achieved their objectives without the terrible loss of life. When he presented his thinking to Thach, Thach said, “You’re totally wrong. We were fighting for our independence. You were fighting to enslave us.”

“But what did you accomplish?” asked McNamara. “You didn’t get any more than we were willing to give you at the beginning of the war. You could have had the whole damn thing: independence, unification.”

“Mr. McNamara,” answered Thach. “You must have never read a history book. If you had, you’d know that we weren’t pawns of the Chinese or the Russians. Don’t you understand that we have been fighting the Chinese for a thousand years? We were fighting for our independence. And we would fight to the last man. And we were determined to do so. And no amount of bombing, no amount of U.S. pressure would ever have stopped us.”

McNamara then realized that the entire war had been based on a complete misunderstanding. He said: “In the case of Vietnam, we didn’t know them well enough to empathize. And there was total misunderstanding as a result. They believed that we had simply replaced the French as a colonial power, and we were seeking to subject South and North Vietnam to our colonial interests, which was absolutely absurd. And we saw Vietnam as an element of the Cold War. Not what they saw it as: a civil war.”

Sometimes people come into conflict not because they disagree, but because they fundamentally misunderstand each other. This can happen when people are viewing a situation from completely different points of view.

Have you ever had someone that you worked with, where you thought, this person is insane; they make no sense; they are crazy; they’re just nuts?

Everyone knows someone like that, right?

Sometimes people really do have mental disorders, including problems that can create danger for themselves and others. If that’s the case, it might make sense to stay away from them, or to seek help from a mental health professional.

But far more often, saying another person is crazy is just a way to create internal coherence within your belief bubble. Your “obvious” is stopping you from seeing clearly. The “crazy person” may be acting based on beliefs that are inconceivable to you because they are outside your bubble.

If you think to yourself, this person is just nuts, and nothing can be done about it, it can’t be changed, then it’s possible that your theory about that person is constrained by a limiting belief.

Most people don’t test their theories about other people, because it’s a potential bubble-buster: if you give your self-sealing logic bubble a true test, then it just might collapse on you.

People do fake tests all the time, of course.

Here’s an easy way to do a fake test of your beliefs. Just search the Internet. No matter what your belief is, you’ll find plenty of articles that support and reinforce your bubble. The Internet is like a grocery store for facts. It’s easier than ever to find “facts” that support pretty much any belief.

Fake tests will help if your goal is to feel better about yourself and reinforce your bubble. But if you want to figure out what is really going on, a fake test will not help.

What will help is triangulation: the practice of developing multiple viewpoints and theories that you can compare, contrast, combine, and validate, to get a better understanding of what’s going on.

U.S. military strategist Roy Adams told me this story about an “aha” moment he had in Iraq.

He was having a beer with a friend who was in the Special Forces. Usually, they didn’t talk about work, but he happened to have a map with him. At the time, Adams and his team were designing their plans based on the political boundaries of the map, so on the map were districts, as well as the people who were in charge of the districts.

His friend said, “You know, this is really interesting.” And he picked up a pen and said, “Let me draw the tribal boundaries on this map for you.” The boundaries were completely different but overlapping. Suddenly, Adams had two different versions of reality on his map.

The political map was primarily a Shia map, and the tribal map had both Sunni and Shia. Only by overlaying the two maps did Adams start to understand the situation. Neither map would have made sense by itself.

By laying these maps over each other, suddenly things started to click. Now he understood why they were having success in some places and meeting resistance in others. Everything started to make more sense.

The insights in this case came not from one map or another, but through overlaying them. This is the practice of triangulation. Each map represented one theory of the world, one version of reality. It was only by viewing the situation through multiple perspectives—multiple theories—that he was able to gain insight and see the situation differently. (Fig. 1)

Fig. 1: Look for alternatives.

My friend Adrian Howard told me about a similar experience he had when working at a large Telecom company that had grown by acquiring other companies over many years. His team found itself running up against resistance and pushback that seemed odd and inexplicable. Then someone on the team took some markers and color-coded the boxes on the org chart based on which companies the people in each box had originally come from—many of whom used to be fierce competitors—and suddenly the reasons for the resistance became clear and understandable.

For any one observation there may be a vast number of possible explanations. Many of them may be based on beliefs that are outside of your current belief bubble, in which case, they may seem strange, absurd, crazy, or just plain wrong.

Most of the time we are all walking around with our heads so full of “obvious” that we can’t see what’s really going on. If you think something is obvious, that’s an idea that bears closer examination. Why do you think it’s obvious? What personal experiences have you had that led to that belief? Can you imagine a different set of experiences that might lead to a different belief?

Cultivate as many theories as you can—including some that seem odd, counter-intuitive, or even mutually contradictory—and hold onto them loosely. Don’t get too attached to any one of them. (Fig. 2)

Fig 2: Hold your theories loosely.

Then you can start asking questions and seeking valid information to help you understand what’s really going on. The way to seek understanding is to empty your cup, step up and give people your full attention, suspend your beliefs and judgments, and listen carefully.

The thing to remember is that people act in ways that make sense to them. If something doesn’t make sense to you, then you’re missing something.

What are you missing? If someone says something that seems odd or unbelievable, ask yourself, “What would I need to believe for that to be true?”

In many cases, the only way you’re ever going to understand what’s inside someone else’s head is by talking to them. Sometimes that idea might seem scary. It may be that you will hear something that threatens your bubble of belief. But if you can get over your fear, go and talk to the dragon, or take the ogre out for coffee. You just may learn something that will change your life.

Practice exercises

Triangulate and validate. Look at situations from as many points of view as possible. Consider the possibility that seemingly different or contradictory beliefs may be valid. If something doesn’t make sense to you, then you’re missing something.

Exercise #1

Think about a co-worker or family member you have trouble getting along with: someone you care about, or can’t walk away from for whatever reason. Consider their beliefs and behavior, and come up with as many theories as you can to explain why they act the way they do. Then see if you can have a conversation with that person to explore what’s really going on.

Exercise #2

Think of a situation at home or work that you find problematic. Try to come up with as many perspectives as you can that might give you a different way to look at the situation. What is your current theory? What is its opposite? How many perspectives or points of view can you think of that might help you see that situation through different eyes?

Want to read more?

Get 20% off your copy of Liminal Thinking and other titles from Two Waves Books—an imprint of Rosenfeld Media—with code ALA-LT.

News stories from Monday 24 October, 2016

Favicon for A List Apart: The Full Feed 05:01 This week's sponsor: INDEED PRIME » Post from A List Apart: The Full Feed Visit off-site link

INDEED PRIME, the job search platform for top tech talent. Apply to 100 top tech companies with 1 simple application.

News stories from Tuesday 18 October, 2016

Favicon for A List Apart: The Full Feed 15:00 JavaScript for Web Designers: DOM Scripting » Post from A List Apart: The Full Feed Visit off-site link

A note from the editors: We’re pleased to share an excerpt from Chapter 5 of Mat Marquis' new book, JavaScript for Web Designers, available now from A Book Apart.

Before we do anything with a page, you and I need to have a talk about something very important: the Document Object Model. There are two purposes to the DOM: providing JavaScript with a map of all the elements on our page, and providing us with a set of methods for accessing those elements, their attributes, and their contents.

The “object” part of Document Object Model should make a lot more sense now than it did the first time the DOM came up, though: the DOM is a representation of a web page in the form of an object, made up of properties that represent each of the document’s child elements and subproperties representing each of those elements’ child elements, and so on. It’s objects all the way down.
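To make that nesting concrete, here is a toy sketch—not the real DOM, just plain object literals standing in for it—showing how “objects all the way down” plays out:

var toyDocument = {
    head: {
        title: { text: "JavaScript for Web Designers" }
    },
    body: {
        firstChild: { tagName: "H1", text: "JavaScript for Web Designers" }
    }
};

// Walking down the chain of properties, just as we will with the real DOM.
toyDocument.body.firstChild.tagName;
"H1"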

window: The Global Context

Everything we do with JavaScript falls within the scope of a single object: window. The window object represents, predictably enough, the entire browser window. It contains the entire DOM, as well as—and this is the tricky part—the whole of JavaScript.

When we first talked about variable scope, we touched on the concept of there being “global” and “local” scopes, meaning that a variable could be made available either to every part of our scripts or to their enclosing function alone.

The window object is that global scope. All of the functions and methods built into JavaScript are built off of the window object. We don’t have to reference window constantly, of course, or you would’ve seen a lot of it before now—since window is the global scope, JavaScript checks window for any variables we haven’t defined ourselves. In fact, the console object that you’ve hopefully come to know and love is a property of the window object:

window.console.log
function log() { [native code] }

It’s hard to visualize globally vs. locally scoped variables before knowing about window, but much easier after: when we introduce a variable to the global scope, we’re making it a property of window—and since we don’t explicitly have to reference window whenever we’re accessing one of its properties or methods, we can call that variable anywhere in our scripts by just using its identifier. When we access an identifier, what we’re really doing is this:

function ourFunction() {
    var localVar = "I’m local.";
    globalVar = "I’m global.";

    return "I’m global too!";
}
undefined

window.ourFunction();
"I’m global too!"

window.localVar;
undefined

window.globalVar;
"I’m global."

The DOM’s entire representation of the page is a property of window: specifically, window.document. Just entering window.document in your developer console will return all of the markup on the current page in one enormous string, which isn’t particularly useful—but everything on the page can be accessed as subproperties of window.document the exact same way. Remember that we don’t need to specify window in order to access its document property—window is the only game in town, after all.

document.head
<head>...</head>

document.body
<body>...</body>

Those two properties are themselves objects that contain properties that are objects, and so on down the chain. (“Everything is an object, kinda.”)

Using the DOM

The objects in window.document make up JavaScript’s map of the document, but that map isn’t terribly useful by itself—at least, not when we’re trying to access DOM nodes the way we’d access any other object. Winding our way through the document object manually would be a huge headache for us, and it would mean our scripts would completely fall apart as soon as any markup changed.

But window.document isn’t just a representation of the page; it also provides us with a smarter API for accessing that information. For instance, if we want to find every p element on a page, we don’t have to write out a string of property keys—we use a helper method built into document that gathers them all into an array-like list for us. Open up any site you want—so long as it likely has a paragraph element or two in it—and try this out in your console:

document.getElementsByTagName( "p" );
[<p>...</p>, <p>...</p>, <p>...</p>, <p>...</p>]

Since we’re dealing with such familiar data types, we already have some idea how to work with them:

var paragraphs = document.getElementsByTagName( "p" );
undefined

paragraphs.length
4

paragraphs[ 0 ];
<p>...</p>

But DOM methods don’t give us arrays, strictly speaking. Methods like getElementsByTagName return “node lists,” which behave a lot like arrays. Each item in a node list refers to an individual node in the DOM—like a p or a div—and comes with a number of DOM-specific properties and methods built in. For example, the innerHTML property will return any markup a node contains—elements, text, and so on—as a string:

var paragraphs = document.getElementsByTagName( "p" ),
    lastIndex = paragraphs.length - 1, /* Use the length of the `paragraphs` node list minus 1 (because of zero-indexing) to get the last paragraph on the page */
    lastParagraph = paragraphs[ lastIndex ];

lastParagraph.innerHTML;
"And that’s how I spent my summer vacation."

Fig 5.1: First drafts are always tough.
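Since node lists only behave like arrays, handy array methods such as forEach and map aren’t guaranteed to be available on them. If you ever need a true array, one common idiom (my aside here, not part of the book’s script) is to borrow Array.prototype.slice:

var paragraphs = document.getElementsByTagName( "p" );

// slice() copies the array-like node list into a genuine array,
// which then has the full set of array methods available.
var paragraphArray = Array.prototype.slice.call( paragraphs );

paragraphArray.forEach( function( paragraph ) {
    console.log( paragraph.innerHTML );
});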

The same way these methods give us access to information on the rendered page, they allow us to alter that information as well. For example, we can assign to the innerHTML property the same way we’d change the value of any other object property: a single equals sign, followed by the new value.

var paragraphs = document.getElementsByTagName( "p" ),
    firstParagraph = paragraphs[ 0 ];

firstParagraph.innerHTML = "Listen up, chumps:";
"Listen up, chumps:"

JavaScript’s map of the DOM works both ways: document is updated whenever any markup changes, and our markup is updated whenever anything within document changes (Fig 5.1).

Likewise, the DOM API gives us a number of methods for creating, adding, and removing elements. They’re all more or less spelled out in plain English, so even though things can seem a little verbose, it isn’t too hard to break down.

DOM Scripting

Before we get started, let’s abandon our developer console for a bit. Ages ago now, we walked through setting up a bare-bones HTML template that pulls in a remote script, and we’re going to revisit that setup now. Between the knowledge you’ve gained about JavaScript so far and an introduction to the DOM, we’re done with just telling our console to parrot things back to us—it’s time to build something.

We’re going to add a “cut” to an index page full of text—a teaser paragraph followed by a link to reveal the full text. We’re not going to make the user navigate to another page, though. Instead, we’ll use JavaScript to show the full text on the same page.

Let’s start by setting up an HTML document that links out to an external stylesheet and an external script file—nothing fancy. Both the stylesheet and the script file are empty for now; I like to keep my CSS in a /css subdirectory and my JavaScript in a /js subdirectory, but do whatever makes you most comfortable.

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <link rel="stylesheet" type="text/css" href="css/style.css">
    </head>
    <body>

        <script src="js/script.js"></script>
    </body>
</html>

We’re going to populate that page with several paragraphs of text. Any ol’ text you can find laying around will do, including—with apologies to the content strategists in the audience—a little old-fashioned lorem ipsum. We’re just mocking up a quick article page, like a blog post.

<!DOCTYPE html>
<html>
    <head>
        <meta charset="utf-8">
        <link rel="stylesheet" type="text/css" href="css/style.css">
    </head>
    <body>
        <h1>JavaScript for Web Designers</h1>

        <p>In all fairness, I should start this book with an apology—not to you, reader, though I don’t doubt that I’ll owe you at least one by the time we get to the end. I owe JavaScript a number of apologies for the things I’ve said to it during the early years of my career, some of which were strong enough to etch glass.</p>

        <p>This is my not-so-subtle way of saying that JavaScript can be a tricky thing to learn.</p>

        [ … ]

        <script src="js/script.js"></script>
    </body>
</html>

Feel free to open up the stylesheet and play with the typography, but don’t get too distracted. We’ll need to write a little CSS later, but for now: we’ve got scripting to do.

We can break this script down into a few discrete tasks: we need to add a Read More link to the first paragraph, we need to hide all the p elements apart from the first one, and we need to reveal those hidden elements when the user interacts with the Read More link.

We’ll start by adding that Read More link to the end of the first paragraph. Open up your still-empty script.js file and enter the following:

var newLink = document.createElement( "a" );

First, we’re initializing the variable newLink, which uses document.createElement( "a" ) to—just like it says on the tin—create a new a element. This element doesn’t really exist anywhere yet—to get it to appear on the page, we’ll need to add it manually. First, though, <a></a> without any attributes or contents isn’t very useful. Before adding it to the page, let’s populate it with whatever information it needs.

We could do this after adding the link to the DOM, of course, but there’s no sense in making multiple updates to the element on the page instead of one update that adds the final result—doing all the work on that element before dropping it into the page helps keep our code predictable.

Making a single trip to the DOM whenever possible is also better for performance—but performance micro-optimization is easy to obsess over. As you’ve seen, JavaScript frequently offers us multiple ways to do the same thing, and one of those methods may technically outperform the other. This invariably leads to “excessively clever” code—convoluted loops that require in-person explanations to make any sense at all, just for the sake of shaving off precious picoseconds of load time. I’ve done it; I still catch myself doing it; but you should try not to. So while making as few round-trips to the DOM as possible is a good habit to be in for the sake of performance, the main reason is that it keeps our code readable and predictable. By only making trips to the DOM when we really need to, we avoid repeating ourselves and we make our interaction points with the DOM more obvious for future maintainers of our scripts.
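To make that concrete, here’s a contrast sketch. It reuses the names our script will introduce shortly (newLink-style element creation and a firstParagraph reference), so treat it as illustration rather than code to type in:

// Touching the live DOM three times: append first, then update twice.
var noisyLink = document.createElement( "a" );
firstParagraph.appendChild( noisyLink );
noisyLink.setAttribute( "href", "#" );
noisyLink.innerHTML = "Read more";

// ...versus configuring everything first and touching the DOM once.
var quietLink = document.createElement( "a" );
quietLink.setAttribute( "href", "#" );
quietLink.innerHTML = "Read more";
firstParagraph.appendChild( quietLink );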

So. Back to our empty, attribute-less <a></a> floating in the JavaScript ether, totally independent of our document.

Now we can use two other DOM interfaces to make that link more useful: setAttribute to give it attributes, and innerHTML to populate it with text. These have a slightly different syntax. We can just assign a string using innerHTML, the way we’d assign a value to any other object. setAttribute, on the other hand, expects two arguments: the attribute and the value we want for that attribute, in that order. Since we don’t actually plan to have this link go anywhere, we’ll just set a hash as the href—a link to the page you’re already on.

var newLink = document.createElement( "a" );

newLink.setAttribute( "href", "#" );
newLink.innerHTML = "Read more";

You’ll notice we’re using these interfaces on our stored reference to the element instead of on document itself. All the DOM’s nodes have access to methods like the ones we’re using here—we only use document.getElementsByTagName( "p" ) because we want to get all the paragraph elements in the document. If we only wanted to get all the paragraph elements inside a certain div, we could do the same thing with a reference to that div—something like ourSpecificDiv.getElementsByTagName( "p" );. And since we’ll want to set the href attribute and the inner HTML of the link we’ve created, we reference these properties using newLink.setAttribute and newLink.innerHTML.
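For example, assuming the page had a div with an id of "sidebar" (an id invented here purely for illustration), we could scope the search to paragraphs inside that one element:

var sidebar = document.getElementById( "sidebar" );

// Only the paragraphs inside that div, not the whole document.
var sidebarParagraphs = sidebar.getElementsByTagName( "p" );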

Next: we want this link to come at the end of our first paragraph, so our script will need a way to reference that first paragraph. We already know that document.getElementsByTagName( "p" ) gives us a node list of all the paragraphs in the page. Since node lists behave like arrays, we can reference the first item in the node list by using the index 0.

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.innerHTML = "Read more";

For the sake of keeping our code readable, it’s a good idea to initialize our variables up at the top of a script—even if only by initializing them as undefined (by giving them an identifier but no value)—if we plan to assign them a value later on. This way we know all the identifiers in play.
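One way to apply that style (a sketch of the convention, equivalent to the script above):

// All the identifiers in play, visible at a glance.
var newLink;
var allParagraphs;
var firstParagraph;

newLink = document.createElement( "a" );
allParagraphs = document.getElementsByTagName( "p" );
firstParagraph = allParagraphs[ 0 ];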

So now we have everything we need in order to append a link to the end of the first paragraph: the element that we want to append (newLink) and the element we want to append it to (firstParagraph).

One of the built-in methods on all DOM nodes is appendChild, which—as the name implies—allows us to append a child element to that DOM node. We’ll call that appendChild method on our saved reference to the first paragraph in the document, passing it newLink as an argument.

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.innerHTML = "Read more";

firstParagraph.appendChild( newLink );

Now—finally—we have something we can point at when we reload the page. If everything has gone according to plan, you’ll now have a Read More link at the end of the first paragraph on the page. If everything hasn’t gone according to plan—because of a misplaced semicolon or mismatched parentheses, for example—your developer console will give you a heads-up that something has gone wrong, so be sure to keep it open.

Pretty close, but a little janky-looking—our link is crashing into the paragraph’s text, since links are display: inline by default (Fig 5.2).

Fig 5.2: Well, it’s a start.

We have a couple of options for dealing with this: I won’t get into all the various syntaxes here, but the DOM also gives us access to styling information about elements—though, in its most basic form, it will only allow us to read and change styling information associated with a style attribute. Just to get a feel for how that works, let’s change the link to display: inline-block and add a few pixels of margin to the left side, so it isn’t colliding with our text. Just like setting attributes, we’ll do this before we add the link to the page:

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.innerHTML = "Read more";
newLink.style.display = "inline-block";
newLink.style.marginLeft = "10px";

firstParagraph.appendChild( newLink );

Well, adding those lines worked, but not without a couple of catches. First, let’s talk about that syntax (Fig 5.3).

Fig 5.3: Now we’re talking.

Remember that identifiers can’t contain hyphens, and since everything is an object (sort of), the DOM references styles in object format as well. Any CSS property that contains a hyphen gets camel-cased instead: margin-left becomes marginLeft, border-top-left-radius becomes borderTopLeftRadius, and so on. Since the value we set for those properties is a string, however, hyphens are just fine. A little awkward and one more thing to remember, but this is manageable enough—certainly no reason to avoid styling in JavaScript, if the situation makes it absolutely necessary.
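If you’d rather not memorize the conversion, a tiny helper function (my own sketch, not part of the DOM or of the book’s script) can translate a hyphenated CSS property name into its camel-cased counterpart:

// Converts "margin-left" to "marginLeft", and so on.
function toDomProperty( cssProperty ) {
    return cssProperty.replace( /-([a-z])/g, function( match, letter ) {
        return letter.toUpperCase();
    });
}

toDomProperty( "border-top-left-radius" );
"borderTopLeftRadius"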

A better reason to avoid styling in JavaScript is to maintain a separation of behavior and presentation. JavaScript is our “behavioral” layer the way CSS is our “presentational” layer, and seldom the twain should meet. Changing styles on a page shouldn’t mean rooting through line after line of functions and variables, the same way we wouldn’t want to bury styles in our markup. The people who might end up maintaining the styles for the site may not be completely comfortable editing JavaScript—and since changing styles in JavaScript means we’re indirectly adding styles via style attributes, whatever we write in a script is going to override the contents of a stylesheet by default.

We can maintain that separation of concerns by instead using setAttribute again to give our link a class. So, let’s scratch out those two styling lines and add one setting a class in their place.

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.setAttribute( "class", "more-link" );
newLink.innerHTML = "Read more";

firstParagraph.appendChild( newLink );

Now we can style .more-link in our stylesheets as usual:

.more-link {
    display: inline-block;
    margin-left: 10px;
}

Much better (Fig 5.4). It’s worth keeping in mind for the future that using setAttribute this way on a node in the DOM would mean overwriting any classes already on the element, but that’s not a concern here, since we’re putting this element together from scratch.
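If we ever did need to add a class to an element that might already have one, appending to its className property is the safer move. A quick sketch of the idiom—not something our script needs here:

// Append rather than overwrite, preserving any existing classes.
if( newLink.className ) {
    newLink.className += " more-link";
} else {
    newLink.className = "more-link";
}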

Fig 5.4: No visible changes, but this change keeps our styling decisions in our CSS and our behavioral decisions in JavaScript.

Now we’re ready to move on to the second item on our to-do list: hiding all the other paragraphs.

Since we’ve made changes to code we know already worked, be sure to reload the page to make sure everything is still working as expected. We don’t want to introduce a bug here and continue on writing code, or we’ll eventually get stuck digging back through all the changes we made. If everything has gone according to plan, the page should look the same when we reload it now.

Now we have a list of all the paragraphs on the page, and we need to act on each of them. We need a loop—and since we’re iterating over an array-like node list, we need a for loop. Just to make sure we have our loop in order, we’ll log each paragraph to the console before we go any further:

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.setAttribute( "class", "more-link" );
newLink.innerHTML = "Read more";

for( var i = 0; i < allParagraphs.length; i++ ) {
    console.log( allParagraphs[ i ] );
}

firstParagraph.appendChild( newLink );

Your Read More link should still be kicking around in the first paragraph as usual, and your console should be rich with filler text (Fig 5.5).

Fig 5.5: Looks like our loop is doing what we expect.

Now we have to hide those paragraphs with display: none, and we have a couple of options: we could use a class the way we did before, but it wouldn’t be a terrible idea to use styles in JavaScript for this. We’re controlling all the hiding and showing from our script, and there’s no chance we’ll want that behavior to be overridden by something in a stylesheet. In this case, it makes sense to use the DOM’s built-in methods for applying styles:

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.setAttribute( "class", "more-link" );
newLink.innerHTML = "Read more";

for( var i = 0; i < allParagraphs.length; i++ ) {
    allParagraphs[ i ].style.display = "none";
}

firstParagraph.appendChild( newLink );

If we reload the page now, everything is gone: our JavaScript loops through the entire list of paragraphs and hides them all. We need to make an exception for the first paragraph, and that means conditional logic—an if statement, and the i variable gives us an easy value to check against:

var newLink = document.createElement( "a" );
var allParagraphs = document.getElementsByTagName( "p" );
var firstParagraph = allParagraphs[ 0 ];

newLink.setAttribute( "href", "#" );
newLink.setAttribute( "class", "more-link" );
newLink.innerHTML = "Read more";

for( var i = 0; i < allParagraphs.length; i++ ) {

    if( i === 0 ) {
        continue;
    }
    allParagraphs[ i ].style.display = "none";
}

firstParagraph.appendChild( newLink );

If this is the first time through the loop, the continue keyword skips the rest of the current iteration, and then—unlike break, which would end the loop entirely—the loop continues on to the next iteration.
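To see the difference between the two keywords, here’s a throwaway sketch (not part of our script) contrasting them:

for( var i = 0; i < 5; i++ ) {
    if( i === 0 ) {
        continue;
    }
    console.log( i ); // logs 1, 2, 3, 4
}

for( var j = 0; j < 5; j++ ) {
    if( j === 2 ) {
        break;
    }
    console.log( j ); // logs 0, 1
}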

If you reload the page now, we’ll have a single paragraph with a Read More link at the end, but all the others will be hidden. Things are looking good so far—and if things aren’t looking quite so good for you, double-check your console to make sure nothing is amiss.

Now that you’ve got a solid grounding in the DOM, let’s really dig in and see where to take it from here.

Want to read more?

The rest of this chapter (even more than you just read!) goes even deeper—and that’s only one chapter out of Mat’s hands-on, help-you-with-your-current-project guide. Check out the rest of JavaScript for Web Designers at A Book Apart.

News stories from Monday 17 October, 2016

Favicon for heise Security 14:51 Identity theft: Banking trojan Acecard wants to snap selfies of its victims » Post from heise Security Visit off-site link

An Android pest asks its victims to pose in front of the camera, national ID card included.

Favicon for heise Security 12:55 GitHub has deleted a list of compromised online shops » Post from heise Security Visit off-site link

The online service removed, without comment, a security researcher's list of URLs of online shops infected with skimming malware. GitLab also deleted the list, but admitted shortly afterwards that doing so was a mistake.

Favicon for heise Security 11:41 Security trade fair it-sa 2016: from ransomware to SCADA » Post from heise Security Visit off-site link

From 18 to 20 October, more than 470 exhibitors will present their products and services at the Nuremberg exhibition center, covering areas such as cloud computing, IT forensics, data backup, and hosting. The c't crypto campaign will also be on site.

Favicon for heise Security 10:19 Encrypted communication: first code audit of the pEp engine published » Post from heise Security Visit off-site link

The Swiss pEp foundation has published the code audit of the pEp engine carried out by the Cologne-based firm Sektioneins. Sektioneins discovered several flaws and has been commissioned to re-examine the code with every relevant update.

News stories from Sunday 16 October, 2016

Favicon for heise Security 12:42 HTTPS encryption on the web reaches 50 percent for the first time » Post from heise Security Visit off-site link

Around half of all web pages are now delivered to users encrypted via HTTPS, according to figures from Google and Mozilla.

News stories from Saturday 15 October, 2016

Favicon for heise Security 13:07 Open database: 58 million records in circulation » Post from heise Security Visit off-site link

An unprotected MongoDB database at the Texas-based service provider Modern Business Solutions has leaked at least 58 million records from the automotive and recruiting industries.

Favicon for heise Security 11:43 DDoS tool Mirai enslaves Sierra Wireless gateways for its IoT botnet » Post from heise Security Visit off-site link

The next batch of IoT devices is being absorbed into botnets: Sierra Wireless modems are taken over not through a security hole, but through default passwords that were never changed.

Favicon for the web hates me 08:00 Lean testing with www.leankoala.com » Post from the web hates me Visit off-site link

It has been quiet on this blog for a long time, but there was a reason for that, and a very good one, I think. After roughly a year of work, I am proud to present our Software-as-a-Service solution www.leankoala.com and to officially kick off its open beta phase. A lot of work, and a lot of pride. But what is it actually about […]

The post Lean testing with www.leankoala.com appeared first on the web hates me.

News stories from Friday 14 October, 2016

Favicon for heise Security 17:55 Cryptocurrency project Ethereum: the next hard fork is coming » Post from heise Security Visit off-site link

The cryptocurrency is facing another hard fork, this one meant to protect against the DoS attacks that have been slowing the Ethereum network for roughly three weeks.

Favicon for heise Security 15:56 "Tutuapp": Chinese app store full of pirated apps is spreading » Post from heise Security Visit off-site link

To get hold of a hacked version of Pokémon Go, more and more teenagers are apparently installing the dubious "Tutuapp" store on their iPhones and Android smartphones, leaving the door wide open for malware.

Favicon for heise Security 14:43 GlobalSign accidentally revokes certificates of many websites » Post from heise Security Visit off-site link

Some web browsers are currently warning that connections to websites such as Wikipedia are no longer secure, because something is wrong with the site's certificate.

Favicon for heise Security 13:33 Von der Leyen names head of her new cyber force » Post from heise Security Visit off-site link

The Bundeswehr's new cyber force is to comprise 13,500 soldiers and civilians, and Major General Ludwig Leinhos has now been appointed its chief. He previously led the unit's setup staff.

Favicon for heise Security 11:57 SSHowDowN: twelve-year-old OpenSSH bug endangers countless IoT devices » Post from heise Security Visit off-site link

Akamai warns that criminals continue to abuse millions of IoT devices for DDoS attacks. The exploited vulnerability is more than a decade old, and many devices reportedly cannot be patched.

Favicon for heise Security 09:53 Tally: Facebook has paid security researchers 5 million US dollars so far » Post from heise Security Visit off-site link

Facebook launched its bug bounty program five years ago and has since paid rewards to thousands of security researchers. The program spans more and more of the company's products.

Favicon for heise Security 09:12 Magento updates: checkout process as a gateway for attackers » Post from heise Security Visit off-site link

Security patches for the shop system close several holes, two of which are considered critical.

News stories from Thursday 13 October, 2016

Favicon for heise Security 17:04 Rigged primes enable backdoors in encryption » Post from heise Security Visit off-site link

A research team has shown that a cleverly constructed prime can be used to build a backdoor into encryption schemes. It cannot be ruled out that this has long since happened to established schemes.

News stories from Friday 07 October, 2016

Favicon for Kopozky 16:27 Pry Hard » Post from Kopozky Visit off-site link

Comic strip: “Pry Hard”

Starring: Mr Kopozky and some client


News stories from Tuesday 04 October, 2016

Favicon for the web hates me 08:00 All good things come in threes – Code Talks 2016 » Post from the web hates me Visit off-site link

After some time of "not speaking," I decided to submit a conference talk again, with success. The conference took place just last week, and as always it was a pleasure to rock the Code Talks in Hamburg. Torsten (@toddyfranz) and I wanted to tell a little about how, back at Gruner+Jahr, we […]

The post All good things come in threes – Code Talks 2016 appeared first on the web hates me.

News stories from Monday 05 September, 2016

Favicon for Kopozky 14:47 The Book – 10 Years Jubilee » Post from Kopozky Visit off-site link

Photo: “Kopozky – The Book”

“Kopozky – The Book”: now available at kopozky-shop.net


News stories from Sunday 07 August, 2016

Favicon for Kopozky 17:59 A Paragon » Post from Kopozky Visit off-site link

Comic strip: “A Paragon”

Starring: Mr Kopozky and The Copywriter


News stories from Tuesday 31 May, 2016

Favicon for Joel on Software 01:14 Introducing HyperDev » Post from Joel on Software Visit off-site link

One more thing…

It’s been a while since we launched a whole new product at Fog Creek Software (the last one was Trello, and that’s doing pretty well). Today we’re announcing the public beta of HyperDev, a developer playground for building full-stack web apps fast.

HyperDev is going to be the fastest way to bang out code and get it running on the internet. We want to eliminate 100% of the complicated administrative details around getting code up and running on a website. The best way to explain that is with a little tour.

Step one. You go to hyperdev.com.

Boom. Your new website is already running. You have your own private virtual machine (well, really it’s a container but you don’t have to care about that or know what that means) running on the internet at its own, custom URL which you can already give people and they can already go to it and see the simple code we started you out with.

All that happened just because you went to hyperdev.com.

Notice what you DIDN’T do.

  • You didn’t make an account.
  • You didn’t use Git. Or any version control, really.
  • You didn’t deal with name servers.
  • You didn’t sign up with a hosting provider.
  • You didn’t provision a server.
  • You didn’t install an operating system or a LAMP stack or Node or anything.
  • You didn’t configure the server.
  • You didn’t figure out how to integrate and deploy your code.

You just went to hyperdev.com. Try it now!

What do you see in your browser?

Well, you’re seeing a basic IDE. There’s a little button that says SHOW and when you click on that, another browser window opens up showing you your website as it appears to the world. Notice that we invented a unique name for you.

Over there in the IDE, in the bottom left, you see some client side files. One of them is called index.html. You know what to do, right? Click on index.html and make a couple of changes to the text.

Now here’s something that is already a little bit magic… As you type changes into the IDE, without saving, those changes are deploying to your new web server and we’re refreshing the web browser for you, so those changes are appearing almost instantly, both in your browser and for anyone else on the internet visiting your URL.

Again, notice what you DIDN’T do:

  • You didn’t hit a “save” button.
  • You didn’t commit to Git.
  • You didn’t push.
  • You didn’t run a deployment script.
  • You didn’t restart the web server.
  • You didn’t refresh the page on your web browser.

You just typed some changes and BOOM they appeared.

OK, so far so good. That’s a little bit like jsFiddle or Stack Overflow snippets, right? NBD.

But let’s look around the IDE some more. In the top left, you see some server-side files. This is actual code that actually runs on the actual (virtual) server that we’re running for you. It’s running Node. If you go into the server.js file you see a bunch of JavaScript. Now change something there, and watch your window over on the right.

Magic again… the changes you are making to the server-side JavaScript code are already deployed, and they’re already showing up live in the web browser you’re pointing at your URL.

Literally every change you make is instantly saved, uploaded to the server, the server is restarted with the new code, and your browser is refreshed, all within half a second. So now your server-side code changes are instantly deployed, and once again, notice that you didn’t:

  • Save
  • Do Git incantations
  • Deploy
  • Buy and configure a continuous integration solution
  • Restart anything
  • Send any SIGHUPs

You just changed the code and it was already reflected on the live server.

Now you’re starting to get the idea of HyperDev. It’s just a SUPER FAST way to get running code up on the internet without dealing with any administrative headaches that are not related to your code.

Ok, now I think I know the next question you’re going to ask me.

“Wait a minute,” you’re going to ask. “If I’m not using Git, is this a single-developer solution?”

No. There’s an Invite button in the top left. You can use that to get a link that you give your friends. When they go to that link, they’ll be editing, live, with you, in the same documents. It’s a magical kind of team programming where everything shows up instantly, like Trello, or Google Docs. It is a magical thing to collaborate with a team of two or three or four people banging away on different parts of the code at the same time without a source control system. It’s remarkably productive; you can dive in and help each other or you can each work on different parts of the code.

“This doesn’t make sense. How is the code not permanently broken? You can’t just sync all our changes continuously!”

You’d be surprised just how well it does work, for most small teams and most simple programming projects. Listen, this is not the future of all software development. Professional software development teams will continue to use professional, robust tools like Git and that’s great. But it’s surprising how just having continuous merging and reliable Undo solves the “version control” problem for all kinds of simple coding problems. And it really does create an insanely addictive form of collaboration that supercharges your team productivity.

“What if I literally type ‘DELETE FROM USERS’ on my way to typing ‘WHERE id=9283’, do I lose all my user data?”

Erm… yes. Don’t do that. This doesn’t come up that often, to be honest, and we’re going to add the world’s simplest “branch” feature so that optionally you can have a “dev” and “live” branch, but for now, yeah, you’d be surprised at how well this works in practice even though in theory it sounds terrifying.

“Does it have to be JavaScript?”

Right now the server we gave you is running Node so today it has to be JavaScript. We’ll add other languages soon.

“What can I do with my server?”

Anything you can do in Node. You can add any package you want just by editing package.json. So literally any working JavaScript you want to cut and paste from Stack Overflow is going to work fine.
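To give a flavor of that (my sketch, not HyperDev’s actual starter code): assuming you added "express" to the dependencies in package.json, a server.js this small would serve a page:

// Assumes "express" has been added to package.json's dependencies.
var express = require( "express" );
var app = express();

app.get( "/", function( req, res ) {
    res.send( "Hello from my container!" );
});

// Hosts like this typically hand you a port via the environment;
// 3000 is just a local fallback.
app.listen( process.env.PORT || 3000 );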

“Is my server always up?”

If you don’t use it for a while, we’ll put your server to sleep, but it will never take more than a few seconds to restart. But yes, for all intents and purposes, you can treat it like a reasonably reliable, 24/7 web server. This is still a beta so don’t ask me how many 9’s. You can have all the 8’s you want.

“Why would I trust my website to you? What if you go out of business?”

There’s nothing special about the container we gave you; it’s a generic VM running Node. There’s nothing special about the way we told you to write code; we do not give you special frameworks or libraries that will lock you in. Download your source code and host it anywhere and you’re back in business.

“How are you going to make money off of this?”

Aaaaaah! Why do you care?!

But seriously, the current plan is to have a free version for public / open source code you don’t mind sharing with the world. If you want private code, much like private repos, there will eventually be paid plans, and we’ll have corporate and enterprise versions. For now it’s all just a beta so don’t worry too much about that!

“What is the point of this Joel?”

As developers we have fantastic sets of amazing tools for building, creating, managing, testing, and deploying our source code. They’re powerful and can do anything you might need. But they’re usually too complex and too complicated for very simple projects. Useful little bits of code never get written because you dread the administration of setting up a new dev environment, source code repo, and server. New programmers and students are overwhelmed by the complexity of distributed version control when they’re still learning to write a while loop. Apps that might solve real problems never get written because of the friction of getting started.

Our theory here is that HyperDev can remove all the barriers to getting started and building useful things, and more great things will get built.

“What now?”

Really? Just go to HyperDev and start playing!

News stories from Monday 23 May, 2016

Favicon for test.ical.ly 08:12 Hello World! » Post from test.ical.ly Visit off-site link

Welcome to the German version of WordPress. This is the first post. You can edit it or delete it. And then start writing!

News stories from Tuesday 10 May, 2016

Favicon for Ramblings of a web guy 22:58 Don't say ASAP when you really mean DEADIN » Post from Ramblings of a web guy Visit off-site link
I have found that people tend to use the acronym ASAP incorrectly. ASAP stands for As Soon As Possible. The most important part of that phrase to me is As Possible. Sometimes, it's only possible to get something done 3 weeks from now due to other priorities. Or, to do it correctly, it will take hours or days. However, some people don't seem to get this concept. Here are a couple of examples I found on the web.

The Problem with ASAP

What ‘ASAP’ Really Means

ASAP is toxic, avoid it As Soon As Possible

ASAP

It's not the fault of those writers. The world in general seems to be confused on this. Not everyone is confused though. I found ASAP — What It REALLY Means which does seem to get the real meaning.

At DealNews, we struggled with the ambiguity surrounding this acronym. To resolve this, we coined our own phrase and acronym to represent what some people seem to think ASAP means.

DEADIN:
Drop
Everything
And
Do
It
Now

We use this when something needs to be done right now. It can't wait. The person being asked to DEADIN a task needs to literally drop what they are doing and do this instead. This is a much clearer term than ASAP.

With this new acronym in your quiver, you can better determine the importance of a task. Now, when someone asks you to do something ASAP, you can ask "Is next Tuesday OK?" Or you can tell them it will take 10 hours to do it right. If they are okay with those answers, they really did mean ASAP. If they are not, you can ask them if you should "Drop Everything And Do It Now". (Pro tip: It will still take 10 hours to do it right. Don't compromise the quality of your work.)

News stories from Monday 09 May, 2016

Favicon for Zach Holman 01:00 The New 10-Year Vesting Schedule » Post from Zach Holman Visit off-site link

While employees have been busy building things, founders and VCs have flipped the industry on its head and aggressively sought to prevent employees from making money from their stock options.

Traditionally, early employees would receive an option grant on a four-year vesting schedule with a one-year cliff. In other words, your stock would slowly “vest”—become available for you to purchase—over the course of four years, with the first options vesting one year after your hire date, and (usually) monthly after that.

The promise of this is to keep employees at the company for a number of years, since they don’t receive the full weight of their stock until they’ve been there four years.
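As a back-of-the-envelope sketch (the function and numbers here are mine, not from any company's paperwork), the vested fraction under a four-year schedule with a one-year cliff looks like this:

// Vested fraction of a grant: 4-year schedule, 1-year cliff.
function vestedFraction( monthsElapsed ) {
    // Nothing vests before the one-year cliff.
    if( monthsElapsed < 12 ) {
        return 0;
    }
    // After the cliff, vesting is linear by month over 48 months.
    return Math.min( monthsElapsed / 48, 1 );
}

vestedFraction( 11 ); // 0: leave a month early and you get nothing
vestedFraction( 12 ); // 0.25: a full year vests at once at the cliff
vestedFraction( 48 ); // 1: fully vested after four years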

Companies still hire with a four year vesting schedule, but the whole damn thing is a lie — in practice, people are going to be stuck at a company for much longer than four years if they want to retain the stock they’ve earned.

This stems from two developments in recent years: companies are staying private longer (the average age of a recently-IPOed tech company is now 11 years old), and companies have clamped down on private sales of employee stock since Facebook’s IPO. The impact is best summed up by the recent Handcuffed to Uber article: in effect, employees can’t leave Uber without either forfeiting a fortune in unexercised stock or paying a massive tax bill on imaginary, illiquid stock.

An industry run by people who haven’t been an employee in years

The leaders in the industry don’t really face any of the problems that employees face. They don’t even sugarcoat it: it’s pretty remarkable how plainspoken CEOs and VCs are when it comes to going public:

“I’m going to make sure it happens as late as possible,” said Kalanick to CNBC Monday. He added that he had no idea if Uber would go public in the next three to five years.

Don’t Expect an Uber IPO Any Time Soon

and:

“I’m committed to Palantir for the long term, and I’ve advised the company to remain private for as long as it can,” said Mr. Thiel, a billionaire.

Palantir and Investors Spar Over How to Cash In

This is a much harder pill to swallow for those at Palantir, which tends to pay its engineers far below market rate. All this comes from CEO Alex Karp, who attempted to make the case that companies should simultaneously pay their employees less, give them more equity, and not allow them to cash that equity out.

Top venture capitalists agree as well:

This is a top VC and luminary advocating for the position that people who end up wanting to make some money on the stock that they’ve worked hard to vest are disloyal. Nothing I’ve read in the last few weeks has made me more furious. We’re now in a position where the four year vesting schedule isn’t enough for these people. They want the four year vesting schedule, and then they want to control your life for the subsequent 4-8 years while they fuck around in the private market.

If you just had a kid and need some additional liquidity, you’re disloyal. If you’d like to pay off your student debt, forget it, we’re not going to incentivize you to do that. If your partner is going back to school and you have to move across the country, tough luck, please turn in your stock options on the way out. If you’ve been busting your ass on a below market-rate salary for years and now you want a bit of what you’ve worked hard to vest, fuck you, go back to work.

Mechanisms of control

There’s obvious things that can be done to help fix this: one of which is getting rid of the 90-day exercise window, which many companies have started to do.

Another is internal stock buybacks, but these are usually low-key and restrictive. Usually you’ll get capped, either on a personal level (you can’t sell back more than x% of your shares) or on a company-wide level (the maximum that this group of employees can sell is xxx,xxx shares).

Or, sometimes these buybacks are limited by tenure: either they’re only for current employees, or you need to be at a company for x years to be able to participate. That’s somewhat reasonable on the surface, but on the other hand, it’s in vogue now for unicorns to staff up and add two thousand people over the last three years you’ve worked there. You might end up managing dozens or hundreds of people in the meantime and have a massive impact on the organization, but still can’t sell some stock to avoid keeping all your eggs in one basket, since only people who have been there four years or more can sell.

Another really dicey thing I’ve heard of happening is the following timeline:

  • Company hires a bunch of people
  • Two years pass
  • Company realizes the stock compensation they’re paying these employees is an order of magnitude lower than market average
  • Company gives new grants to employees to, in effect, “make up” for the difference
  • Company grants at a new four year vesting schedule

And that, ladies and gentlemen, is how you sneak a ton of your employees into a de facto six year vesting schedule. A few companies I’ve heard this happening at will give that refresh grant at maybe 10x their initial grant (given how far below market rate their initial grant was), so the employee is effectively stuck for the whole six year ride if they want to retain what they earn. They’ll virtually all go ahead and stick it out, particularly if they weren’t told that this is a catch-up grant — hey, I must be doing really great here, look at how big this second grant is!

Founders of VC-backed companies are insulated from these problems. Once you’ve reached a certain level of success — say, a $100M valuation or unicorn status or some such milestone — it’s expected that your investors will strongly encourage you to take some money off the table between financing rounds so you don’t have to deal with the stress of running a high-growth business while trying to make ends meet.

No one’s yet explained to me, though, why that reasoning works for founders but not for the first employee.

I get wanting to retain people, but strictly using financial levers to do that feels skeezy, and besides, monetary rewards might not be what ultimately motivates people, past a certain point. If you really want to retain your good people, stop building fucking horrible company cultures. You already got your four year vest out of these tenured employees; you can’t move the levers retroactively just because you’re grumpy it’s five years later and you’re not worth a trillion dollars yet.

Public Enemy

There are some people who have been pushing for solutions to these problems.

Mark Cuban’s been pushing the SEC to make a number of changes to make going public easier, arguing that “it’s worth the hassle to go public”. Mark Zuckerberg’s been pushing that angle as well. And, of course, Fred Wilson had his truly lovely message to Travis Kalanick:

You can’t just say fuck you. Take the goddamn company public.

There are a lot of possible ways to address these problems: taking companies public earlier, being progressive when it comes to exercise windows, doing internal buybacks more often and more permissively, adjusting the tax laws to treat illiquid options differently, and so on. I just don’t know if anyone’s really going to fix it while the people in charge aren’t experiencing the pain.

News stories from Thursday 28 April, 2016

Favicon for Zach Holman 01:00 Evaluating Delusional Startups » Post from Zach Holman Visit off-site link

We’re proven entrepreneurs — one cofounder interned at Apple in 2015, and the other helped organize the annual Stanford wake-and-bake charity smoke-off — who are going to take a huge bite out of the $45 trillion Korean baked vegan food goods delivery market for people who live within one block of Valencia Street (but not towards Mission Street because it’s gross and off-brand), and we’re looking for seasoned rockstars to launch this rocket ship into outer space, come join us, we’re likely backed by one of the venture capitalists you possibly read about in a recent court deposition!

Okay, so they’re not always going to come at you like this. If you’re in the market for a new gig at a hot startup, it’s worthwhile to spend some time thinking about whether your sneaking suspicions are correct and the company you’re interviewing with might be full of pretty delusional people.

Here are a couple of traits of delusional startups I’ve been noticing.

I’m gonna make you rich, Bud Fox

After a long afternoon of interviews, I sat down with some head-of-something-rather. Almost verbatim, as well as I can remember it, he dropped this lovely gem in the first four minutes of the conversation:

Now, certainly you’d be joining a rocket ship. And clearly the stock you’d have would make you rich. So what I want to aaaaahhHHHHHHHHHH! thhhwaapkt

The second part of whatever he was saying got swallowed up by the huge Irony Vortex From Six Months In The Future that zipped into existence right next to him, as the Rocket Ship He Was On would promptly implode half a year later.

In my experience, people who promise riches for you, a new hire, fall into two camps:

  • They’re destined to lose it all, or
  • They’re about to become mega rich, and assume the breadcrumbs that fell from the corners of their mouths will also make you mega rich, obviously

Both of those camps are fairly delusional.

Many leaders — unfortunately not all, but that’s life — that have a good chance at striking it rich tend to be pretty realistic, cautious, and optimistically humble about it. In turn, having those personality traits might also lead them to making more generous decisions down the line that would benefit you as well, so that’s also a bonus.

Lately I’ve heard something specific come up from a number of my close friends: the bonus they just received in the first six months from their new job at a large corporate gig far dwarfed the stock proceeds they made from the hot startup they had worked at for years.

People have been saying this for decades, but it’s always worth reiterating: don’t join a startup for the pay, and if someone’s trying to dangle that in front of your eyes, you can tell them to shove their rocket ship up their you-know-where.

The blame game

A company I was interviewing at borked a final interview slot with a head-of-something-such, so I rescheduled them for coffee the following week.

Sipped my tea for half an hour… no show. Hey, it sucks, but miscommunication happens so it wasn’t much to fret over.

The rescheduled phone call another week later started off with an apology that quickly turned into a shitstorm. The main production service was down he said, and therefore he could not attend our coffee, nor could he look up and send me an email about it, even though he did notice it and did briefly feel bad about it. The fucking CEO shat on my team the next day in front of the whole company which was complete bullshit because his team Had Done All The Necessary Things and really it was The CEO’s Dumb Fault The Shit Was All Broken Anyway right? Christ. In any case the position we were interviewing you for has been filled do you want to try for anything else?

So there were a lot of things to unwind here, and I truly do have stories from interviewing at this company that will last me until the end of the sixth Clinton administration, but the real toxic aspect is the:

  • Dude complaining about leadership
  • Leadership blaming specific people and teams across the whole company

Cultures that throw each other under the bus — in either direction, up or down — don’t function as well. The wheels will fall off the wagon at some point, and you’re going to end up with a shit product. You can even be one of those bonkers 120-hour work week startups, grinding hard at all hours of the day, and still be good people to each other. You’ve got to bounce back from setbacks and mistakes. Blameless cultures are better cultures.

On a related note, it’s amazing what you can sometimes get people to admit in an interview. While chatting with another startup, I informally asked what the two employees thought of one of the cofounders. Total shit was the flat response. Doesn’t do jack, and really doesn’t belong in engineering anymore. Props for their openness, I guess, and maybe it helped me dodge a bullet, but how employees talk about others behind their backs says a lot about how cohesive and supportive the company is.

We’re backed by the best VCs, we’re very highly educated, we know product, we have the best product

I don’t understand how you can love your startup’s product.

For me, the high is all about what’s happening next. Can’t wait to ship that new design. The refactoring getting worked on will be an order of magnitude more performant. The wireframes for where we’re hoping to be two years from now is dripping with dopamine.

I don’t understand people who are happy with what they’ve got today. Once you’re happy, you’re in maintenance mode, and maybe that’s fine if you’ve finished your product and are ready to coast on your fat stacks, but by that point you’re beyond building something new anyway. These startups who eagerly float by on shit they did years ago, assuming that rep will carry through any new competition… I just don’t understand that.

Stewart Butterfield has a healthy viewpoint when he talks about Slack:

Are you bringing other changes to Slack?
Oh, God, yeah. I try to instill this into the rest of the team but certainly I feel that what we have right now is just a giant piece of shit. Like, it’s just terrible and we should be humiliated that we offer this to the public.

Certainly he’s being a bit facetious here, since I don’t imagine he thinks the work his employees have done is shit — rather, a product is a process and it takes a long time to chip away the raw marble into the statue inside of it.

The other weird aspect of this that I've noticed is that there are some companies who truly hate their competition. I really dig competition, and I think it brings out good stuff across the board, but when it flips into Hatred Of The Enemy it just gets weird. Like c'mon, each of your apps puts mustaches on pictures of fish, y'all gotta chill the fuck out, lol.

Asking people what they think about their competition can be a pretty decent measurement of whether the company twiddles the Thumbs of Delusion. If they flatly espouse hatred, that’s weird. If they take a nuanced approach and contrast differences in respective philosophies, that’s promising, because it means they’ve actually thought through what makes them different, and their product and culture likely will be stronger for it.

It also likely just means fewer dicks at the company. You can only deal with so much hatred in life before it sucks you up into a hole.

ymmv

I get that startups are supposed to be — by definition, really — delusional, in some respect. You’re building something that wasn’t there before, and it takes a lot of faith to build a nascent idea up into something big. So you need a leader to basically throw down so everyone can rally behind her.

Maybe I’m an ancient, grizzled old industry fuck now that I’m nearly 31, but I’m weary of seeing the sky-high bonkersmobiles driving around town these days. That’s part of the reason I’m cautiously optimistic about this bubble that will certainly almost certainly okay maybe it’ll pop again soon — it’ll get people a little more realistic about their goals again.

I still think startups are great and can change the world and all that bullshit… I just think it’s worthwhile to stop and think hard about what your potential company is promising you. Catching these things early on in the process can help save you a ton of pain down the road.

And if you’re hearing these things at your current company, well, good luck! You’re assuredly already on a rocket ship, surely, so congrats!

News stories from Friday 01 April, 2016

Favicon for Grumpy Gamer 14:45 Hey, guess what day it is... » Post from Grumpy Gamer Visit off-site link

That's right, it's the day the entire Internet magically thinks it's funny.

Pro-tip: You're not.

As it has been for going on twelve years, Grumpy Gamer is 100% April Fools' Day joke free.

I realize that's kind of ironic to say, since this blog is pretty much everything-free these days, as I'm spending all my time blogging about Thimbleweed Park, the new point & click adventure game I'm working on.

And no, that is not a joke, check it out.

News stories from Wednesday 16 March, 2016

Favicon for Zach Holman 01:00 Firing People » Post from Zach Holman Visit off-site link

So it’s been a little over a year since GitHub fired me.

I initially made a vague tweet about leaving the company, and then a few weeks later I wrote Fired, which made it pretty clear that leaving the company was involuntary.

The reaction to that post was pretty interesting. It hit 100,000 page views within the first few days after publishing, spurred 389 comments on Hacker News, and indeed, is currently the 131st most-upvoted story on Hacker News of all time.

Let me just say one thing first: it’s pretty goddamn weird to have so many people interested in discussing one of your biggest professional failures. There were a few hard-hitting Real Professional Journalists out there launching some bombs from the 90 yard line, too:

If an employer has decided to fire you, then you’ve not only failed at your job, you’ve failed as a human being.

and

Why does everyone feel compelled to live their life in the public? Shut up and sit down! You ain’t special, dear..

and

Who is the dude?

You and me both, buddy. I ask myself that every day.


The vast majority of the comments were truly lovely, though, as were the hundreds of emails I got over the subsequent days. Over and over again it became obvious how commonplace getting fired and getting laid off are. Everyone seemingly has a story about something they fucked up, or about someone that fucked them up. This is not a rare occurrence, and yet no one ever talks about it publicly.

As I stumbled through the rest of 2015, though, something that had bothered me from the onset crept forward more and more: the post, much like the initial vague tweet, didn't say anything. That was purposeful, of course; I was still processing what the whole thing meant to me, and what it could mean.

I’ve spent the last year constantly thinking about it over and over and over. I’ve also talked to hundreds and hundreds of people about the experience and about their experiences, ranging from the relatively unknown developer getting axed to executives getting pushed out of Fortune 500 companies.

It bothers me that no one really talks about this. We come up with euphemisms, like "funemployment!" and "finding my next journey!", all the while ignoring the real pain associated with getting forced out of a company. And christ, there's a lot of real pain that can happen.

How can we start fixing these problems if we can’t even talk about them?

Me speaking at Bath Ruby

I spoke this past week at Bath Ruby 2016, in Bath, England. The talk was about my experiences leaving GitHub, as well as the experiences of so many of the people I’ve talked to and studied over the last year. You can follow along with the slide deck if you’d like, or wait for the full video of the talk to come out in the coming weeks.

I also wanted to write a companion piece as well. There’s just a lot that can’t get shoehorned into a time-limited talk. That’s what you’re reading right now. So curl up by the fire, print out this entire thing onto like a bajillion pages of dead tree pulp, and prepare to read a masterpiece about firing people. Once you realize that you’re stuck with this drivel, you can toss the pages onto the fire and start reading this on your iPad instead.


The advice people most readily give out on this topic today is:

🚒🔥FIRE FAST 🔥🚒

“Fire fast”, they say! You have to fire fast because we’re moving really fuckin’ fast and we don’t have no time to deal with no shitty people draggin’ us down! Move fast and break people! Eat a big fat one, we’re going to the fuckin’ MOOOOOOOOON!

What the shit does that even mean, fire fast? Should I fire people four minutes after I hire them? That’ll show ‘em!

What about after a mistake? Should we fire people as retribution? Do people get second chances?

When we fire people, how do we handle things like continuity of insurance? Or details like taxes, stock, and follow-along communication? How do we handle security concerns when someone leaves an organization?

There’s a lot of advice that’s needed beyond fire fast. “Move fast and break people” doesn’t make any goddamn sense to me.

I’ve heard a lot of funny stories from people in the last year. From the cloud host employee who accidentally uploaded a pirated TV show to company servers and got immediately fired his second week on the job (“oops!” he remarked in hindsight) to the Apple employee who liked my initial post but “per company policy I’m not allowed to talk about why your post may or may not be relevant to me”.

I’ve also heard a lot of sad stories too. From someone whose board pushed them out of their own startup, but was forced to say they resigned for the sake of appearance:

There aren’t adjectives to explain the feeling when your baby tells you it doesn’t want/need you any more.

We might ask: why should we even care about this? They are ex-employees, after all — a sentiment memorably skewered in the seminal 1999 treatise on corporate technology management/worker relations, Office Space.

The answer, of course, is: we should care about all this because we’re human beings, dammit. How we treat employees, past and present, is a reflection on the company itself. Great companies care deeply about the relationship they maintain with everyone who has contributed to the success of the company.

This is kind of a dreary subject, but don’t worry too much: I’m going to aspire to make this piece as funny and as light-hearted as I can. It’s also going to be pretty long, but that’s okay, sometimes long things are worth it. (Haha dick joke, see? See what I’m doing here? God these jokes are going to doom us all.)

Perspectives

One last thing before we can finally ditch these long-winded introductory sections: what you're going to be reading is primarily my narrative, with support from many, many other stories hung off of the broader points.

Listen: I’m not super keen on doing this. I don’t particularly want to make this all about me, or about my experiences getting fired or quitting from any of my previous places of employment. This is a particularly depressing aspect in my life, and even a year later I’m still trying to cope with as much depression as anyone can really reasonably deal with.

But I don’t know how to talk about this in the abstract. The specifics are really where all the important details are. You need the specifics to understand the pain.

As such, this primarily comes at the problem from a specific perspective: an American living in San Francisco, working for a California-based tech startup.

When I initially wrote my first public “I’m fired!” post, some of you in more-civilized places with strong employee-friendly laws like Germany or France were aghast: who did I murder to get fired from my job? How many babies did I microwave to get to that point? Am I on a watchlist for even asking you that question?

California, though, is an at-will state. Employees can be fired for pretty much any reason. If your boss doesn’t like the color of shoes you’re wearing that day, BOOM! Fired. If they don’t like how you break down oxygen using your lungs in order to power your feeble human body, BOOM! Fired. Totally cool. As long as they’re not discriminating against federally-protected classes — religion, race, gender, disability, etc. — they’re in the clear.

Not all of you are working for companies like this. That's okay — really, that's great! — because I still think this touches on a lot of really broad points relevant to everyone. As I was building this talk out, I ended up noticing a ton of crossover with leaving a company in general, be it intentionally, unintentionally, on friendly terms, or on hostile terms. Chances are you're not going to be at your company forever, so a lot of this is going to be helpful for you to start thinking about now, even if you ultimately don't leave until years in the future.

Beyond that, I tried to target three different perspectives throughout all this, and I’ll call them out in separately-colored sections as well:

You

You: your perspective. If you ever end up in the hot seat and realize you're about to get fired, this talk is primarily for you. There are a lot of helpful hints for you to take into consideration in the moment, but also for the immediate future.

Company

Company: from the perspective of the employer. Again, the major thing I’m trying to get across is to normalize the idea of termination of employment. I’m not trying to demonize the employer at all, because there are a lot of things the employer can do to really help the new former employee out and to help the company out as well. I’ll make a note of them in these blocks.

Coworker

Coworker: the perspective that's really not considered very much is the coworker's. Since they're not usually involved in the termination itself, a lot of the time it's out of sight, out of mind. That's a bit unfortunate, because there are some interesting aspects that can be helpful to keep in mind in the event that someone you work with gets fired.

Got it? Okay, let’s get into the thick of things.

Backstory

I’m Zach Holman. I was number nine at GitHub, and was there between 2010 and 2015. I saw it grow to 250 employees (they’ve since doubled in size and have grown to 500 in the last year).

I’m kind of at the extreme end of the spectrum when it comes to leaving a company, which can be helpful for others for the purposes of taking lessons away from an experience. It had been a company I had truly grown to love, and in many ways I had been the face of GitHub, as I did a lot of talks and blog posts that mentioned my experiences there. More than once I had been confusingly introduced as a founder or CEO of the company. That, in part, was how I ultimately was able to sneak into the Andreessen Horowitz corporate apartments and stayed there rent-free for sixteen months. I currently have twelve monogrammed a16z robes in my collection, and possibly was involved in mistakenly giving the greenlight to a Zenefits employee who came by asking if they could get an additional key to the stairwell for a… meeting.

Fast forward to summer of 2014: I had been the top committer to the main github/github repository for the last two years, I had just led the team that shipped one of the last major changes to the site, and around that time I had had a mid-year performance review with my manager that was pretty glowing and had resulted in me receiving one of the largest refresh grants they had given during that review period.

This feels a little self-congratulatory to write now, of course, but I’ll lend you a quick reminder: I did get fired nonetheless, ha. The point I’m trying to put across with all this babble is that on the surface, I was objectively one of the last employees one might think to get fired in the subsequent six months. But everyone’s really at risk: unless you own the company, the company owns you.

Around the start of the fall, though, I had started feeling pretty burnt out. I had started to realize that I hadn’t taken a vacation in five years. Sure, I’d been out of town, and I’d even ostensibly taken time off to have some “vacations”, but in hindsight they were really anything but: I’d still be checking email, I’d still be checking every single @github mention on Twitter, and I’d still dip into chat from time to time. Mentally, I would still be in the game. That’s a mistake I’ll never make again, because though I had handled it well for years — and even truly enjoyed it — it really does grind you down over time. Reading virtually every mention of your company’s name on Twitter for five straight years is exhausting.

By the time November came around, I was looking for a new long-term project to take on. I took a week offsite with three other long-tenured GitHubbers and we started to tackle a very large new product, but I think we were all pretty well burnt out by then. By the end of the week it was clear to me how fried I was; brainstorming should not have been that difficult.

I chatted with the CEO at this point about things. He’s always been pretty cognizant of the need for a good work/life balance, and encouraged taking an open-ended sabbatical away from work for awhile.

My preference would be for you to stay at GitHub […] When you came back would be totally up to you

By February, my manager had sent me an email with the following:

Before agreeing to your return […] we need to chat through some things

You

First thing here from your perspective is to be wary if the goalposts are getting moved on you. I’m not sure if there was miscommunication higher up with my particular situation, but in general things start getting dicey if there’s a set direction you need to head towards and that direction suddenly gets shifted.

After I got fired, I talked to one of my mentors about the whole experience. This is a benefit of finding mentors who have been through everything in the industry way before you even got there: that experience flows pretty easily from them.

After relaying this story, my friend immediately laughed and said, “yeah, that’s exactly the moment when they started the process to fire you”. I kinda shrugged it off and suggested it was a right-hand-meet-left kinda thing, or maybe he was reading it wrong. He replied no, that is exactly the kind of email he had sent in the past when he was firing someone at one of his companies, and it was also the kind of email he had received right before he was fired in the past, too.

Be wary of any sudden goalpost moves, really. I'll mention later on PIPs — performance improvement plans — and how they can be really helpful to employees as well as to employers, but in general if someone's setting you up with specific new guidelines to follow, you should view it with a critical eye.

At this point things were getting a tad surprising. By February, the first time I received an email from my manager about all this, I hadn't been involved with the company at all for two months through my sabbatical, and I hadn't even talked to my manager in four months, ever since she had decided that 1:1s weren't really valuable between her and me. This was well and fine with me, since I had been assigned to a bit of a catch-all team whose members didn't work together on anything, and I was pretty comfortable moving around the organization and working with others in any case.

I was in Colorado at the time, but agreed to meet up and have a video chat about things. When I jumped on the call, I noticed that — surprise! — someone from HR was on the call as well.

Turns out, HR doesn't normally join calls for fun. Really, I'm not sure anyone joins video chats for fun. So this should have been the first thing that tickled my spidey-sense, but I kinda just tucked it in the back of my mind since I didn't really have time to consider things much while the call was going on.

At this point, I was feeling pretty good about life again; the time off had left me feeling pretty stoked about building things again, and I had a long list of a dozen things I was planning on shipping in my first month back on the job. The call turned fairly confrontational off the bat, though; my manager kept asking how I felt, I said I felt pretty great and wanted to get to work, but that didn't seem to really be the correct answer. Things took a turn south and we went back-and-forth about things. This led to her calling me an asshole twice (in front of HR, again, who didn't seem to mind).

In hindsight, yeah, I was probably a bit of an asshole; I tend to clam up during bits of confrontation that I hadn’t thought through ahead of time, and most of my responses were pretty terse in the affirmative rather than offering a ton of detail about my thoughts.

After the conversation had ended on a fairly poor note, I thought things through some more and found it pretty weird to be in a position with a superior who was outwardly fairly hostile to me, and I made my first major mistake: I talked to HR.

I was on really good terms with the head of HR, so the next day I sent an email to her making my third written formal request in the prior six months or so to be moved off of my team and onto another team. I had some thoughts on where I’d rather see myself, but really, any other team at that point I would have been happy with; I had pretty close working relationships with all of the rest of the managers at the company. On top of that, the team I was currently on didn’t have any association with each other, so I figured it wouldn’t be a big deal to switch to another arbitrary team.

The head of HR was really great, and found the whole situation to be a bit baffling. We started talking about which teams might make sense, and I asked around to a couple people as to whether they would be happy with a new refugee (they were all thumbs-up on the idea). She agreed to talk to some of the higher-ups about things, and we’d probably arrange a sit-down in person when I came back in a few days to SF to sort out the details.

You

Don’t talk to HR.

This pains me to say. I’ve liked pretty much every person in HR at all the companies I’ve worked for; certainly we don’t want to view them as the enemy.

But you have to look to their motivations, and HR exists only to protect the company’s interests. Naturally you should aim to be cordial if HR comes knocking and wants to talk to you, but going out of your way to bring something to the attention of HR is a risk.

Unfortunately, this is especially important to consider if you’re in a marginalized community. Many women in our industry, for example, have gone to HR to report sexual harassment and promptly found that they were the one who got fired. Similar stories exist in the trans community and with people who have had to deal with racial issues.

Ultimately it’s up to you whether you think HR at your company can be trusted to be responsible with your complaint, but it also might be worthwhile to consider alternative options as well (i.e., speaking with a manager if you think they’d be a strength in the dispute, exploring legal or criminal recourse, and so on).

HR is definitely a friend. But not to you.

Company

Avoid surprises. I’ve talked with a lot of former employees over the last year, and the ones with the most painful stories usually stem from being unceremoniously dropped into their predicament.

From a corporate perspective, it's always painful to lose employees — regardless of the manner in which the employee leaves the company. But it's almost always going to be even more painful for the former employee.

I was out at a conference overseas a few years back with a few coworkers. One of my coworkers received a notice that he was to sit down on a video chat with the person he was reporting to at the time. He was fretting about it given the situation was a bit sudden and out of the ordinary, but I tried to soothe his fears, joking that they wouldn’t fire him right before an international conference that he was representing the company at. Sure enough, they fired him. Shows what I really knew about this stuff.

Losing your job is already tough. Dealing with it without a lot of lead-up to consider your options is even harder.

One of the best ways to tackle this is with a performance improvement plan, or PIP. Instituting a PIP is relatively straightforward: you tell the employee that they’re not really where you’d like to see them and that they’re in danger of losing their job, but you set clear goals so that the employee gets the chance at turning things around.

This is typically viewed as the company covering their ass so when they fire you it’s justified, but really I view it as a mutual benefit: it’s crystal-clear to the employee as to what they need to do to change their status in the organization. Sometimes they just didn’t know they were a low performer. Sometimes there are other problems in their life that impacted their performance, and it’s great to get that communication out there. Sometimes someone’s really not up to snuff, but they can at least spend some time preparing themselves prior to being shown the door.

The point is: surprise firings are the worst types of firings. It’s better for the company and for the employee to both be clear as to what their mutual expectations are. Then they can happily move forward from there.

At this point, I finished up my trip and flew back to San Francisco. It was time to chat in person.

Fired

I was fired before I entered the room.

You’re not going to be happy here. We need to move you out of the company.

That was the first thing that was said to me in the meeting between me, the CEO, and the head of HR. Not even sure I had finished sitting down, but I only needed a glance at the faces to know what was in the pipeline for this meeting.

You’re not going to be happy here is a bullshit phrase, of course, but not one that I have a lot of problems with in hindsight. My happiness has no impact on the company — my output does — but I think it was a helpful euphemism, at least.

You

Chill. The first thing I’d advise if you find yourself in the hot seat is to just chill out. I did that reasonably well, I think, by nodding, laughing, and giving each person in the room a hug before splitting. It was a pretty reasonable break, and I got to have a long chat with the head of HR immediately afterwards where we shot the shit about everything for awhile.

You ever watch soccer (or football, for you hipster international folk that still refuse to call it by its original name)? Dude gets a yellow card, and more often than not what does he do? Yells at the ref. Same for any sport, really. How many times does the ref say ah shit, sorry buddy, totally got it wrong, let me grab that card back? It just doesn’t happen.

That’s where you are in this circumstance. You can’t argue yourself back into a job, so don’t try to. At this point, just consider yourself coasting. If it’s helpful to imagine you’re a tiny alien controlling your humanoid form from inside your head a la the tiny outworlder in Men in Black, go for it.

My friend’s going through a particularly gnarly three- or four-weeks of getting fired from a company right now (don’t ask; it’s a disaster). This is the same type of advice I gave them: don’t feel like you need to make any statements or sign any legal agreements or make any decisions whatsoever while you’re in the room or immediately outside of it. If there’s something that needs your immediate attention, so be it, but most reasonable companies are going to give you some time to collect your thoughts, come up with a plan, and enact it instead of forcing you to sign something at gunpoint.

Remember: even if you're really shit professionally, you'll probably only get fired what, every couple of years? If you're an average person, what, maybe once in a lifetime? Depending on the experience of management, the person firing you may deal with this situation multiple times a year. They're better at it than you are, and they're far less stressed out about it. I was in pretty good spirits at the time, but looking back I certainly wasn't in my normal mindset.

Emotionally compromised

You’re basically like new-badass-Spock in the Star Trek reboot: you have been emotionally compromised; please note that shit in the ship’s log.

I’m still not fully certain why I got the axe; it was never made explicit to me. I asked other managers and those on the highest level of leadership, and everyone seemed be as confused as I was.

My best guess is that it’s Tall Poppy Syndrome, a phrase I was unfamiliar with until an Aussie told me about it. (Everything worthwhile in life I’ve learned from an Australian, basically.) The tallest poppy gets cut first.

With that, I don’t mean that I’m particularly talented or anything like that; I mean that I was the most obvious advocate internally for certain viewpoints, given how I’ve talked externally about how the old GitHub worked. In Japanese the phrase apparently translates to The tallest nail gets the hammer, which I think works better for this particular situation, heh. I had on occasion mentioned internally my misgivings about the lack of movement happening on any product development, and additionally the increasing unhappiness of many employees due to some internal policy changes and company growth.

Improving the product and keeping people happy are pretty important in my eyes, but I had declined earlier requests to move towards the management side of things, so primarily I was fairly heads-down on building stuff at that point rather than leading the charge for a lot of change internally. So maybe it was something else entirely; I'm not sure. I'm left with a lot of guesses.

Company

Lockdown. The first thing to do after — or even while — someone is fired is to start locking down their access to everything. This is pretty standard to remove liability from any bad actors. Certainly the vast majority of people will never be a problem, but it’s also not insulting or anything from a former employee standpoint, either. (It’s preferred, really: if I’ve very recently been kicked out of a company, I’d really like to be removed from production access as soon as possible so I don’t even have to worry about accidentally breaking something after my tenure is finished, for example. It’s best for everyone.)

From a technical standpoint, you should automate the process of credential rolling as much as possible. All the API keys, passwords, user accounts, and other credentials should be regenerated and replaced in one fell swoop.

Automate this because, well, as you grow, more people are inherently going to leave your company, and streamlining this process is going to make it easier on everyone. No one gets up in the morning, jumps out of bed, throws open the curtains and yells out: OH GOODIE! I GET TO FIRE MORE PEOPLE TODAY AND CHANGE CONFIG VALUES FOR THE NEXT EIGHT HOURS! THANK THE MAKER!

Ideally this should be as close to a single console command or chat command as possible. If you’re following twelve-factor app standards, your config values should already be stored in the environment rather than tucked deep into code constants. Swap them out, and feel better about yourself while you have to perform a pretty dreary task.
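
To make that concrete, here's a minimal sketch of what a one-command credential roll might look like, assuming (per twelve-factor) that your secrets live in the environment. Everything here — the key names, the rotate_all helper, writing a .env file — is hypothetical stand-in plumbing, not anyone's real tooling:

```python
import secrets

# Hypothetical credential names -- stand-ins for whatever your app
# actually reads from the environment, twelve-factor style.
CREDENTIAL_KEYS = ["API_TOKEN", "DB_PASSWORD", "SESSION_SECRET"]

def rotate_all(env_file=".env"):
    """Regenerate every shared credential in one sweep.

    A real version would push the new values to your secret store and
    kick off a restart or re-deploy; here we just write a fresh .env.
    """
    rotated = {key: secrets.token_urlsafe(32) for key in CREDENTIAL_KEYS}
    with open(env_file, "w") as f:
        for key, value in rotated.items():
            f.write(f"{key}={value}\n")
    return rotated

if __name__ == "__main__":
    for key in rotate_all():
        print(f"rotated {key}")
```

The specifics don't matter; what matters is that everything a departing employee could have seen gets regenerated by one boring, repeatable command.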

Understand the implications of what you’re doing, though. I remember hearing a story from years back of someone getting let go from a company. Sure, that sucks, but what happened next was even worse: the firee had just received their photos back from their recent wedding, so they tossed them into their Dropbox. At the time, Dropbox didn’t really distinguish between personal and corporate accounts, and all the data was kind of mixed together. When the person was let go, the company removed access to the corporate Dropbox account, which makes complete sense, of course. Unfortunately that also deleted all their wedding photos. Basically like salt in an open wound. Dropbox has long since fixed this problem by better splitting up personal and business accounts, but it’s still a somewhat amusing story of what can go wrong if there’s not a deeper understanding of the implications of cutting off someone’s access.

Understand the real-world implications as well. Let’s take a purely hypothetical, can’t-possibly-have-happened-in-real-life example of this.

Does your company:

  • Give out RFID keyfobs instead of traditional metal keys in order to get into your office?
  • Does your office have multiple floors?
  • Do you disable the employee’s keyfob at the exact same time they’re getting fired?
  • Do you, for the sake of argument, also require keyfob access inside your building to access individual floors?
  • Is it possible — just possible at all, stay with me here — that the employee was fired on the third floor?
  • And is it possible that the employee would then go down to the second floor to collect their bag?
  • Is it at all possible that you’ve locked your newly-fired former employee INTO THE STAIRWELL, unable to enter the second floor, instead having to awkwardly text a friend they knew would be next to the door with a very unfortunate HI CAN YOU UNLOCK THE SECOND FLOOR DOOR FOR ME SINCE MY KEYFOB DOESN’T WORK PROBABLY BECAUSE I JUST GOT FIRED HA HA HA YEAH THAT’S A THING NOW WE SHOULD CHAT.

Totally hypothetical situation.

Yeah, totally was me. It was hilarious. I was laughing for a good three minutes while someone got up to grab the door.

Anyway, think about all of these implications. Particularly if the employee loses access to their corporate email account; many times services like healthcare, stock information, and payroll information may be tied to that email address, and that poses even more problems for the former employee.

This also underscores the benefit of keeping a cordial relationship between the company and the former employee. When I was fired, I found I still had access to a small handful of internal apps whose OAuth tokens weren't getting rolled properly. I shot an email to the security team, so hopefully they were invalidated and taken care of for future former employees.

Although now that I think about it, I still have access to the analytics for many of GitHub’s side properties; I’ve been unable to get a number of different people to pull the plug for me. I think instead I’ll just say it’s a clear indicator of the trust my former employer has in my relationship with them. :heart:

One last thing to add in this section: my friend Reg tweeted something recently that stuck with me.

I really like this sentiment a lot, and will keep it in mind when I'm in that position next. Occasionally you'll see the odd person vent about this on Twitter, and it's clear that firing someone is a stressful process. But be careful who you vent that stress to — vent up the chain of command, not down — because you're still not the one suffering the most from all this.

Coworker

Determine the rationale. Once someone's actually been fired, this is really your first opportunity as a coworker to have some involvement in the process. Certainly you're not aiming to butt in and try to be the center of everything here, but there are some things you can keep in mind to help your former coworker, your company, and ultimately, yourself.

Determining the rationale I think is the natural first step. You’re no help to anyone if you get fired as well. And sometimes — but obviously not always — if someone you work with gets fired, it could pose problems for you too, particularly if you work on the same team.

Ask around. Your direct manager is a great place to start if you have a good relationship with them. You don’t necessarily need to invade the firee’s privacy and pry into every single detail, but I think it’s reasonable to ask if the project you’re working on is possibly going to undertake a restructuring, or if it might get killed, or any number of other things. Don’t look desperate, of course — OH MY GOD ARE WE ALL GOING TO GET SHITCANNED???? — but a respectful curiosity shouldn’t hurt in most healthy organizations.

Gossip is a potential next step. Everyone hates on gossip, true, but I think it can have its place for people who aren’t in management positions. Again, knowing every single detail isn’t really relevant to you, but getting the benchmark of people around you on your level can be helpful for you to judge your own position. It also might be helpful as a sort of mea culpa when you talk to your manager, as giving them a perspective from the boots on the ground, so to speak, might be beneficial for them when judging the overall health of the team.

Company

Be truthful internally. Jumping back to the employer's side of things, just be sure to be truthful. Again, the privacy of your former employee's experience is very important to protect, but how you talk about it to other employees can be pretty telling.

Be especially cautious when using phrases like mutually agreed. Very few departures are mutually-agreed upon. If they were thinking of leaving, there’s a good chance they’d have already left.

In my case, my former manager emailed her team and included this sentence:

We had a very honest and productive conversation with Zach this morning and decided it was best to part ways.

There certainly wasn’t any conversation, and the sentence implies that it was a mutual decision. She wasn’t even in the room, either, so the we is a bit suspect as well, ha.

In either case, I was already out the door, so it doesn't bother me very much. But everyone in the rank-and-file is better-networked than you are as a manager, and communication flows pretty freely once an event happens. So be truthful now; otherwise you poison the well for future email announcements. Be a bit misleading today and everyone will look at you as misleading in the future.

The last bit to consider is group firing: firing more than one person on the same day. This is a very strong signal, and it’s up to you as to what you’re trying to signal here. If you take a bunch of scattered clear under-performers and fire them all on the same day, then the signal might be that the company is cleaning up and is focused squarely on improving problems. If the decision appears rather arbitrary, you run the risk of signaling that firing people is also arbitrary, and your existing employees might be put in a pretty stressful situation when reflecting on their own jobs.

Firing is tough. If you’ve ever done it before you know it’s not necessarily just about the manager and the employee: it can impact a lot more people than that.

So, I was fired. I walked out of the room, got briefly locked inside the office stairwell, and then walked to grab my stuff.

After

What next?

It’s a tough question. At this point I was kind of on auto-pilot, with the notion of being fired not really settling out in my mind yet.

I went to where my stuff was and started chatting with my closer friends. (I wasn’t escorted out of the building or any of that silliness.)

I started seeing friendly faces walk by and say hi, since in many cases I hadn’t seen or talked to most of my coworkers in months, having never come back in an official capacity from my sabbatical. I immediately took to walking up to them, giving them a long, deeply uncomfortable and lingering hug, and then whispering in their ear: it was very nice working with you. also I just got fired. It was a pretty good troll given such short notice, all things considered. We all had a good laugh, and then people stuck around so they could watch me do it to someone else. By the end I had a good dozen or so people around chatting and avoiding work. A+++ time, would do again.

lol jesus just realized what I typed, god no, I’d probably avoid getting fired the next time, I mean. I’m just pretty dope at trolling is all I’m sayin’.

Egregious selfie of the author

Eventually I walked out of the office and started heading towards tacos, where I was planning on drinking way too many margaritas with a dear friend who was still at the company (for the time being). Please note: tacos tend to solve all problems. By this point, the remote workers had all heard the news, so my phone started blowing up with text messages. I was still feeling pretty good about life, so I took this selfie and started sending it to people in lieu of going into a ton of detail with each person about my mental state.

In prepping this talk, I took a look at this selfie for the first time in quite a number of months and noticed I was wearing earbuds. Clearly I was listening to something as I strutted out of the office. Luckily I scrobble my music to Last.fm, so I could go back and look. That's how I found out what I was listening to:

Eponine

On My Own, as sung by Eponine in the award-winning musical Les Misérables. Shit you not. It’s like I’m some emo fourteen-year-old just discovering their first breakup or something. Nice work, Holman.

Shortly thereafter, I sent out the aforementioned tweet.

Again, it’s pretty vague and didn’t address whether I had quit or I’d been fired. I was pretty far away from processing things. I think being evasive made some sense at the time.

I’ve been journaling every few days pretty regularly for a few years now, and it’s one of the best things I’ve ever done for myself. I definitely wrote a really long entry for myself that day. I went back and took a look while I was preparing this talk, and this section jumped out at me:

The weird part is how much this is about me. This is happening to me right now. I didn’t really expect it to feel so intimate, a kind of whoa, this is my experience right now and nobody else’s.

In hindsight, yeah, that's absolutely one of the stronger feelings I still feel from everything. When you think about it, most of the experiences you have in life are shared with others: join a new job, share it with your new coworkers. Get married, share it with your new partner and your friends and family. Best I can tell, getting fired and dying are among the few burdens that are yours and yours alone. I didn't really anticipate what that would feel like ahead of time.

By later in the night, I was feeling pretty down. It was definitely a roller coaster of a day: text messages, tweets, margaritas, financial advisors, lawyers, introspective walks in the park. I didn't necessarily think I'd be flying high for the rest of my life, but that didn't make the crash any easier, either. And that experience has matched my last year, really: some decent highs, some pretty dangerous lows. Five years being that deeply intertwined with a company is toeing a line, and I've been paying for it ever since.

Loose Ends

Good god, it really takes an awful lot of work to leave work.

There’s a number of immediate concerns you need to deal with:

  • Who owns your physical hardware? Is your computer owned by the company? Your phone? Any other devices? Do you need to wipe any devices, or pull personal data off of any of them?
  • Do you have any outstanding expenses to deal with? I had a conference in Australia a few weeks later that I had to deal with. I had told the organizers that GitHub would pay for my expenses to attend, but I hadn't booked the trip yet. Luckily it was no problem for GitHub to pick up the tab (I was still representing the company there, somewhat awkwardly), but it was still something else I needed to remember to handle right away.
  • How’s your healthcare situation, if you’re unfortunate enough to live in a country where healthcare Is A Thing. In the US, COBRA exists to help provide continuity of health insurance between jobs, and it should cover you during any gaps in your coverage. It was one more thing to have to worry about, although admittedly I was pleasantly surprised at how (relatively) easy using COBRA was; I was expecting to jump through some really horrible hoops.

The next thing to consider is severance pay. Each company tends to handle things differently here, and at least in the US, there’s not necessarily a good standard of what to expect in terms of post-termination terms and compensation.

There’s a lot of potential minefields involved in dealing with the separation agreement needed to agree upon severance, though.

Unfortunately I can’t go into much detail here other than say that we reached an equitable agreement, but it did take a considerable amount of time to get to that point.

One of the major general concerns when a worker leaves an American-based startup is the treatment of their stock options. A large part of equity compensation takes place in the form of ISOs — incentive stock options — which offer favorable tax treatment in the long term.

Unfortunately, vested unexercised ISOs are capped at 90 days post-employment by law, meaning that they disappear in a puff of smoke once you reach that limit. This poses a problem at today's anti-IPO startups that simultaneously reject secondary sales, which limits the options available for an employee to exercise their stock (doing so might cost an early employee hundreds of thousands of dollars that they don't have, excluding the corresponding tax hit as well).
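
To make the arithmetic concrete, here's a quick sketch with entirely made-up numbers — the share count and prices are hypothetical; the shape of the math is the point:

```python
# Illustrative, invented numbers -- not my situation, and not tax advice.
vested_options = 100_000
strike_price = 2.00        # what you pay per share to exercise
fair_market_value = 10.00  # the current 409A price per share

cash_to_exercise = vested_options * strike_price
paper_gain = vested_options * (fair_market_value - strike_price)

print(f"cash needed to exercise: ${cash_to_exercise:,.0f}")  # $200,000
print(f"taxable paper gain:      ${paper_gain:,.0f}")        # $800,000
# The AMT can tax that $800,000 "gain" -- on stock you likely can't
# sell yet. Hence: hundreds of thousands of dollars, due within 90 days.
```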

Another possibility that's quickly gaining steam lately is to convert those ISOs to NSOs at the 90-day mark and extend the option window to something longer, like seven or ten years, instead of a blistering 90 days. In my mind, companies who haven't switched to a longer exercise window are actively stealing from their employees: the employees have worked hard to vest their options over a period of years, but because of the very success they helped create, they now can't afford to exercise them.

I’ve talked a lot about this in greater length in my aptly-titled post, Fuck Your 90 Day Exercise Window, as well as started a listing of employee-friendly companies with extended exercise windows. Suffice to say, this is a pretty important aspect to me and was a big topic in the discussions surrounding my separation agreement.

I had been talking to various officials in leadership for a few months hammering out the details and had been under the impression that we had reached an agreement, but I was surprised to find out that wasn’t the case. I was informed 28 hours before my 90 day window closed that the agreement I had thought I had didn’t exist; it was then that I realized I had 28 hours to either come up with hundreds of thousands of dollars that I didn’t have to save half of my stock, or I could sign the agreement as-is and avoid losing half of my already-diminished stake. I opted to sign.

You

Get everything in writing. This also supports my earlier point of aiming to not do anything in the room while you’re getting fired; it allows you to take some time out and think things through once you have the legalese in front of you (and preferably in front of a lawyer).

I think it’s fully acceptable to stay on-the-record. So no phone calls, no meetings in person. Again, you’re up against people who have done this frequently in the past, and it’s a good chance these thoughts haven’t crossed your mind before.

A lot of it certainly might not even be malicious; I’d imagine a lot of people you chat with could be good friends who want to see you leave in good shape, but at the end of the day it’s really dicey to assume the company as a whole is deeply looking out for your interests. The only person looking out for your interests is you.

This also underlines the generally great advice of always knowing a good lawyer, a good accountant, and a good financial advisor. You don’t necessarily have to be currently engaged with a firm; just knowing who to ask for recommendations is a great start. If you can take some time and have some introductory calls with different firms ahead of time, that’s even better. The vast majority of legal and financial firms will be happy to take a quick introductory phone call with you free-of-charge to explain their value proposition. This is highly advantageous for you to do ahead of time so you don’t need to do this when you’re deep in the thick of a potential crisis.

All things considered, though, we did reach an agreement and I was officially free and clear of the company.

Life after

That brings us to the last few months and up to the present. I've spent the last year or so trying to sort out my life and my resulting depression. Shit sucks. Professionally I've done some consulting and private talks here and there, which have been tepidly interesting. I've also served in a formal advisory role to three startups, which I've really come to enjoy; after being so heads-down on a single problem for the last five years, it's nice to get a fair amount of depth in multiple problem spaces, some of which are entirely new to me.

But I still haven’t found the next thing I’m really interested in, which just feeds into the whole cycle some more. For better or worse, that’ll be changing pretty quickly, since I’m pretty broke after working part-time and living in San Francisco for so long. Even though I helped move a company’s valuation almost two billion dollars, I haven’t made a dime from the company outside of making a pretty below-to-average salary. That’s after six years.

Think on that, kids, when you’re busting your ass day and night to strike it rich with your startup dreams.

Coworker

It’s cool to stay in touch. Something that’s kind of cracked me up lately is the sheer logistics behind keeping in touch with my former coworkers. On one hand, you lose out on your normal chat conversations, lunches, and in-person meetings with these colleagues. It’s just a human trait that it’s harder to keep these relationships up when they’re out of sight, out of mind.

Beyond that, though, when you're out of the company you're also out of the rolodex. You might not know someone's phone number or personal email address anymore, for example. A lot of the time you, as a coworker, are in a better position to reach out to a former colleague than they are to you, since you still have access to these infrastructures. It's possible someone would be up for a chat, but the difficulty in doing so provides a bit of a barrier, so it's fine to reach out and say hi sometimes! Even in the worst corporate breakups that I've heard about, people are usually able to separate bad experiences with the company from bad experiences with you, so you shouldn't be too worried about that if you weren't directly involved.

The one aspect of all this that you might want to keep in mind — something I've heard crop up again and again from a number of former employees — is the question of conversational topics.

In some sense I think it's natural for existing employees to vent to former employees who may have left on bad terms about the gossip that's happening at the company. To take an example from my own experiences, I don't think there's anyone else on the planet who knows more dirt on GitHub than I do at this point, even including current employees. I'm certain I did two to three times as many 1:1s as anyone else at the company in the months following my departure; I think I was a natural point of contact for many who were frustrated at some internal aspects of the company they were dealing with.

And that’s fine, to an extent; schadenfreude is a thing, and it can be helpful for awhile, for both parties. But man, it gets tiring, particularly when you’re not paid for it. Especially when you’re still suffering from feelings from it. It’s hard to move on when every day there’s something new to trigger it all over again.

So don’t be afraid to be cautious with what you say. If they’re up to hearing new dirt, so be it; if they’re a bit fried about it, chat about your new puppy instead. Everyone loves puppies.

One of the very bright points from all of this is the self-organized GitHub alumni network. Xubbers, we call ourselves. We have a private Facebook group and a private Slack room to talk about things. It’s really about 60% therapy, 20% shooting the shit just like the old days, and 20% networking and supporting each other as we move forward in our new careers apart.

I can’t underline how much I’ve appreciated this group. In the past I’ve kept in contact with coworkers from previous points of employment, but I hadn’t worked somewhere with enough former employees to necessarily warrant a full alumni group.

Highly recommend pulling a group like this together for your own company. On a long enough timescale, you’re all going to join our ranks anyway. Unless you die first. Then we’ll mount your head on the wall like in a private hunter’s club or something. “The one that almost got away”, we’ll call it.

Xubber meetup

In some sense, I think alumni really continue the culture of the company, independent of what changes may or may not befall the company itself.

One of my favorite stories about all this lately is from Parse. Unfortunately, the circumstances around it aren’t super happy: after being acquired by Facebook, Parse ultimately was killed off last month.

The Parse alumni, though, got together last month to give their beloved company a proper send-off.

No funeral would be complete, though, without a cake. (I'm stretching the metaphor here, but that's okay, just roll with it.) Parse's take on the cake involved an upside-down Facebook "like" button, complete with blood.

The most important part of a company is the lasting mark they leave on the world. That mark is almost always the people. Chances are, your people aren’t going to be at your company forever. You want them to move on and do great things. You want them to carry with them the best parts of your culture on to new challenges, new companies, and new approaches.

Once you see that happening, then you can be satisfied with the job you’ve done.

Company

Cultivate the relationship with your alumni. Immediately after parting ways with an employee, there will be a number of important aspects that will require a lot of communication: healthcare, taxes, stock, and so on. So that type of follow-on communication is important to keep in mind.

There are plenty of longer-term relationships to keep in mind as well, though. Things like help with recruiting referrals, potential professional relationships with the former employee’s new company, and other bidirectional ways to help each other in general. It’s good to support those lines of communication.

One way to help this along is to simply provide an obvious point of contact. Having something like an alumni@ email address available is a huge benefit. Otherwise it becomes a smorgasbord of playing guess-the-email-account, which causes problems for your current employees as well. Just set up an alumni@ alias that forwards somewhere sensible, and keep it up-to-date through any changes on your organizational side of things.

The last thing to consider is that your alumni are a truly fantastic source of recruiting talent. Most employment terminations are either voluntary (i.e., quitting) or at least on fairly good terms. There are plenty of reasons to leave a job for purposes unrelated to your overall opinion of the company: maybe you’re moving to a different city, or you’re taking a break from work to focus on your kids, or you simply want to try something new. You can be an advocate for your former employer without having to continue your tenure there yourself.

And that’s a good thing. Everyone wants to be the one who helps their friend find a new job. That’s one of the best things you can do for someone. If the company treated them well, they can treat the company well by helping to staff it with good people.

If the company has a poor relationship with former employees, however, one can expect that relationship to go both ways. And nothing is a stronger signal for prospective new hires than to talk to former employees and get their thoughts on the situation.

Next

It’s not your company. If you don’t own the company, the company owns you.

That’s really been a hard lesson for me. I was pretty wrapped up in working there. It’s a broader concept, really, shoved down our throats in the tech industry. Work long hours and move fast. Here, try on this company hoodie. Have this catered lunch so you don’t have to go out into the real world. This is your new home. The industry is replete with this stuff.

One of my friends took an interesting perspective:

I always try to leave on a high note. Because once you’re there, you’re never going to hit that peak again.

What she was getting at, I think, is that you'll know. You'll know the difference between doing far and away your best work, and doing work that is still good, but just nominally better than what you've been doing. Once you catch yourself adjusting to that incremental progression… maybe it's time to leave, to change things up. Just thought that was interesting.

One of my favorite conversations I've had recently was with Ron Johnson. Ron was in charge of rolling out the Apple Store: everything from the Genius Bar to the physical setup to how the staff operated. He eventually left Apple and became the CEO of JCPenney, one of the large stalwart department stores in the United States. Depending on who you ask, he either revolutionized what department stores could be but ran out of time to see the changes bear fruit, or seriously jeopardized JCPenney's relationship with its customers by putting them through some new changes.

In either case, there had been some discussions internally and he had agreed to resign. A few days later, the board went ahead and very publicly fired him instead.

We chatted about this, and he said something that I really think helped clarify my opinion on everything:

There’s nothing wrong with moving along… regardless of whether it is self-driven or company-driven. Maybe we need new language… right now it’s either we resign or get fired.

Maybe there’s a third concept which is “next”.

Maybe we should simply recognize it’s time for next.

I like that sentiment.

Firing people is a normal function in a healthy, growing company. The company you start at might end up very distinctly different by the time you leave it. Or you might be the one who does the changing. Life’s too nuanced to make these blanket assumptions when we hear about someone getting fired.

Talk about it. If not publicly, then talk openly with your friends and family about things. I don’t know much, but I do know we can’t start fixing and improving this process if we continue to push the discussions to dark alleyways of our minds.

When I finished giving this talk in the UK last week, I was kind of nervous about how many in the audience could really identify with what I was describing. Shortly after the conference finished up, we went to the after-party, and I was showered with story after story of bad experiences, good experiences, and just overall experiences, from people who hadn't really been able to talk frankly about these topics before. It was pretty humbling. So many people have stories.

Thanks for reading my story.

What’s next?

News stories from Tuesday 01 March, 2016

Favicon for Zach Holman 01:00 How to Deploy Software » Post from Zach Holman Visit off-site link

How to
Deploy Software

Make your team’s deploys as boring as hell and stop stressing about it.

Let's talk deployment

Whenever you make a change to your codebase, there's always going to be a risk that you're about to break something.

No one likes downtime, no one likes cranky users, and no one enjoys angry managers. So the act of deploying new code to production tends to be a pretty stressful process.

It doesn't have to be as stressful, though. There's one phrase I'm going to be reiterating over and over throughout this whole piece:

Your deploys should be as boring, straightforward, and stress-free as possible.

Deploying major new features to production should be as easy as starting a flamewar on Hacker News about spaces versus tabs. They should be easy for new employees to understand, they should be defensive towards errors, and they should be well-tested far before the first end-user ever sees a line of new code.

This is a long — sorry not sorry! — written piece specifically about the high-level aspects of deployment: collaboration, safety, and pace. There's plenty to be said for the low-level aspects as well, but those are harder to generalize across languages and, to be honest, a lot closer to being solved than the high-level process aspects. I love talking about how teams work together, and deployment is one of the most critical parts of working with other people. I think it's worth your time to evaluate how your team is faring, from time to time.

A lot of this piece stems from both my experiences during my five-year tenure at GitHub and during my last year of advising and consulting with a whole slew of tech companies big and small, with an emphasis on improving their deployment workflows (which have ranged from "pretty respectable" to "I think the servers must literally be on fire right now"). In particular, one of the startups I'm advising is Dockbit, whose product is squarely aimed at collaborating on deploys, and much of this piece grew out of conversations I've had with their team. There are so many different parts of the puzzle that I thought it'd be helpful to get them written down.

I'm indebted to some friends from different companies who gave this a look-over and helped shed some light on their respective deploy perspectives: Corey Donohoe (Heroku), Jesse Toth (GitHub), Aman Gupta (GitHub), and Paul Betts (Slack). I continually found it amusing how the different companies might have taken different approaches but generally all focused on the same underlying aspects of collaboration, risk, and caution. I think there's something universal here.

Anyway, this is a long intro and for that I'd apologize, but this whole goddamn writeup is going to be long anyway, so deal with it, lol.

Table of Contents

  1. Goals

    Aren't deploys a solved problem?

  2. Prepare

    Start prepping for the deploy by thinking about testing, feature flags, and your general code collaboration approach.

  3. Branch

    Branching your code is really the fundamental part of deploying. You're segregating any possible unintended consequences of the new code you're deploying. Start thinking about different approaches involved with branch deploys, auto deploys on master, and blue/green deploys.

  4. Control

    The meat of deploys. How can you control the code that gets released? Deal with different permissions structures around deployment and merges, develop an audit trail of all your deploys, and keep everything orderly with deploy locks and deploy queues.

  5. Monitor

    Cool, so your code's out in the wild. Now you can fret about the different monitoring aspects of your deploy, gathering metrics to prove your deploy, and ultimately making the decision as to whether or not to roll back your changes.

  6. Conclusion

    "What did we learn, Palmer?"
    "I don't know, sir."
    "I don't fuckin' know either. I guess we learned not to do it again."
    "Yes, sir."

How to Deploy Software was originally published on March 1, 2016.

Goals

Aren't deploys a solved problem?

If you’re talking about the process of taking lines of code and transferring them onto a different server, then yeah, things are pretty solved and are pretty boring. You’ve got Capistrano in Ruby, Fabric in Python, Shipit in Node, all of AWS, and hell, even FTP is going to stick around for probably another few centuries. So tools aren’t really a problem right now.

So if we have pretty good tooling at this point, why do deploys go wrong? Why do people ship bugs at all? Why is there downtime? We’re all perfect programmers with perfect code, dammit.

Obviously things happen that you didn’t quite anticipate. And that’s where I think deployment is such an interesting area for small- to medium-sized companies to focus on. Very few areas will give you a bigger bang for your buck. Can you build processes into your workflow that anticipate these problems early? Can you use different tooling to help collaborate on your deploys easier?

This isn't a tooling problem; this is a process problem.

The vast, vast majority of startups I've talked to over the last few years just don't have a good handle on what a "good" deployment workflow looks like from an organizational perspective.

You don't need release managers, you don't need special deploy days, you don't need all hands on deck for every single deploy. You just need to take some smart approaches.

Prepare

Start with a good foundation

You've got to walk before you run. I think there's a trendy crowd of startups out there who all want to be on the coolest new deployment tooling, but when you pop in and look at their process, they're spending 80% of their time futzing with the basics. If they streamlined that first, everything else would fall into place a lot quicker.

Tests

Testing is the easiest place to start. It's not necessarily part of the literal deployment process, but it has a tremendous impact on it.

There's a lot of tricks that depend on your language or your platform or your framework, but as general advice: test your code, and speed those tests up.

My favorite quote about this was written by Ryan Tomayko in GitHub's internal testing docs:

We can make good tests run fast but we can't make fast tests be good.

So start with a good foundation: have good tests. Don't skimp out on this, because it impacts everything else down the line.

Once you start having a quality test suite that you can rely upon, though, it's time to start throwing money at the problem. If you have any sort of revenue or funding behind your team, almost the number one area you should spend money on is whatever you run your tests on. If you use something like Travis CI or CircleCI, run parallel builds if you can and double whatever you're spending today. If you run on dedicated hardware, buy a huge server.

Moving to a faster test suite is one of the most important productivity wins a company can earn, simply because it impacts iteration feedback cycles, time to deploy, developer happiness, and momentum. Throw money at the problem: servers are cheap, developers are not.

I ran an informal Twitter poll asking my followers just how fast their test suites ran. Granted, it's hard to account for microservices, language variation, the surprising number of people who didn't have any tests at all, and full-stack versus quicker unit tests, but it still became pretty clear that most people are going to be waiting at least five minutes after a push to see the build status.

How fast should fast really be? GitHub's tests generally ran within 2-3 minutes while I was there. We didn't have a lot of integration tests, which allowed for relatively quick test runs, but in general the faster you can run them the faster you're going to have that feedback loop for your developers.

There are a lot of projects aimed at helping you parallelize your builds. There's parallel_tests and test-queue in Ruby, for example. There's a good chance you'll need to write your tests differently if they aren't yet fully independent of each other, but that's really something you should be aiming for in either case.

Feature Flags

The other aspect of all this is to start looking at your code and transitioning it to support multiple deployed codepaths at once.

Again, our goal is that your deploys should be as boring, straightforward, and stress-free as possible. The natural stress point of deploying any new code is running into problems you can't foresee, and you ultimately impact user behavior (i.e., they experience downtime and bugs). Bad code is going to end up getting deployed even if you have the best programmers in the universe. Whether that bad code impacts 100% of users or just one user is what's important.

One easy way to handle this is with feature flags. Feature flags have been around since, well, technically since the if statement was invented, but the first time I remember really hearing about a company's usage of feature flags was Flickr's 2009 post, Flipping Out.

These allow us to turn on features that we are actively developing without being affected by the changes other developers are making. It also lets us turn individual features on and off for testing.

Having features in production that only you can see, or only your team can see, or all of your employees can see provides for two things: you can test code in the real world with real data and make sure things work and "feel right", and you can get real benchmarks as to the performance and risk involved if the feature got rolled out to the general population of all your users.

The huge benefit of all of this means that when you're ready to deploy your new feature, all you have to do is flip one line to true and everyone sees the new code paths. It makes that typically-scary new release deploy as boring, straightforward, and stress-free as possible.
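To make that concrete, here's a minimal hand-rolled sketch of such a flag check. It's not any particular library, and the flag name, the staff? method, and current_user are all hypothetical:

# A hand-rolled feature flag check (illustrative sketch, not a real library).
FEATURE_FLAGS = {
  # Flip :enabled to true when the feature should ship to everyone.
  new_dashboard: { enabled: false, staff_only: true },
}

def feature_enabled?(flag_name, user)
  flag = FEATURE_FLAGS.fetch(flag_name, {})
  return true if flag[:enabled]      # released to the general population
  flag[:staff_only] && user.respond_to?(:staff?) && user.staff?
end

# At the fork in your code:
#   if feature_enabled?(:new_dashboard, current_user)
#     render_new_dashboard
#   else
#     render_old_dashboard
#   end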

Provably-correct deploys

As an additional step, feature flags provide a great way to prove that the code you're about to deploy won't have adverse impacts on performance and reliability. There's been a number of new tools and behaviors in recent years that help you do this.

I wrote a lot about this a couple years back in my companion written piece to my talk, Move Fast and Break Nothing. The gist of it is to run both codepaths of the feature flag in production and only return the results of the legacy code, collect statistics on both codepaths, and be able to generate graphs and statistical data on whether the code you're introducing to production matches the behavior of the code you're replacing. Once you have that data, you can be sure you won't break anything. Deploys become boring, straightforward, and stress-free.

Move Fast Break Nothing screenshot

GitHub open-sourced a Ruby library called Scientist to help abstract a lot of this away. The library's being ported to most popular languages at this point, so it might be worth your time to look into this if you're interested.
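The core pattern, lightly adapted from Scientist's README (the permission-checking methods here are made up), looks like this:

require "scientist"

class PermissionChecker
  include Scientist

  def allowed?(user)
    science "new-permissions" do |experiment|
      experiment.use { legacy_allowed?(user) }    # control: the old, trusted code
      experiment.try { rewritten_allowed?(user) } # candidate: the new code
    end
    # Both blocks run in production; the control's result is always returned,
    # and timings and mismatches are handed to a publish hook you implement.
  end
end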

The other leg of all of this is percentage rollout. Once you're pretty confident that the code you're deploying is accurate, it's still prudent to only roll it out to a small percentage of users first to double-check and triple-check nothing unforeseen is going to break. It's better to break things for 5% of users instead of 100%.

There's plenty of libraries that aim to help out with this, ranging from Rollout in Ruby, Togglz in Java, fflip in JavaScript, and many others. There's also startups tackling this problem too, like LaunchDarkly.
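With the Ruby rollout gem, for instance, the percentage dance looks roughly like this (the feature name is hypothetical):

require "redis"
require "rollout"

$rollout = Rollout.new(Redis.new)

# Expose the feature to 5% of users first; bucketing is based on user id.
$rollout.activate_percentage(:new_permissions, 5)

user = Struct.new(:id).new(42) # stand-in for your current_user

if $rollout.active?(:new_permissions, user)
  # new codepath
else
  # old codepath
end

# Once the metrics look healthy, widen the net:
$rollout.activate_percentage(:new_permissions, 100)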

It's also worth noting that this doesn't have to be a web-only thing. Native apps can benefit from this exact behavior too. Take a peek at GroundControl for a library that handles this behavior in iOS.


Feeling good with how you're building your code out? Great. Now that we got that out of the way, we can start talking about deploys.

Branch

Organize with branches

A lot of the organizational problems surrounding deployment stem from a lack of communication between the person deploying new code and the rest of the people who work on the app with her. You want everyone to know the full scope of changes you're pushing, and you want to avoid stepping on anyone else's toes while you do it.

There's a few interesting behaviors that can be used to help with this, and they all depend on the simplest unit of deployment: the branch.

Code branches

By "branch", I mean a branch in Git, or Mercurial, or whatever you happen to be using for version control. Cut a branch early, work on it, and push it up to your preferred code host (GitLab, Bitbucket, etc).

You should also be using pull requests, merge requests, or other code review to keep track of discussion on the code you're introducing. Deployments need to be collaborative, and using code review is a big part of that. We'll touch on pull requests in a bit more detail later in this piece.

Code Review

The topic of code review is long, complicated, and pretty specific to your organization and your risk profile. I think there's a couple important areas common to all organizations to consider, though:

  • Your branch is your responsibility. The companies I've seen who have tended to be more successful have all had this idea that the ultimate responsibility of the code that gets deployed falls upon the person or people who wrote that code. They don't throw code over the wall to some special person with deploy powers or testing powers and then get up and go to lunch. Those people certainly should be involved in the process of code review, but the most important part of all of this is that you are responsible for your code. If it breaks, you fix it… not your poor ops team. So don't break it.

  • Start reviews early and often. You don't need to finish a branch before you can request comments on it. If you can open a code review with imaginary code to gauge interest in the interface, for example, the twenty minutes spent doing that and getting told "no, let's not do this" are far preferable to blowing two weeks on the full implementation instead.

  • Someone needs to review. How you do this can depend on the organization, but certainly getting another pair of eyes on code can be really helpful. For more structured companies, you might want to explicitly assign people to the review and demand they review it before it goes out. For less structured companies, you could mention different teams to see who's most readily available to help you out. At either end of the spectrum, you're setting expectations that someone needs to lend you a hand before storming off and deploying code solo.

Branch and deploy pacing

There's an old joke that's been passed around from time to time about code review. Whenever you open a code review on a branch with six lines of code, you're more likely to get a lot of teammates dropping in and picking apart those six lines left and right. But when you push a branch that you've been working on for weeks, you'll usually just get people commenting with a quick 👍🏼 looks good to me!

Basically, developers are usually just a bunch of goddamn lazy trolls.

You can use that to your advantage, though: build software using quick, tiny branches and pull requests. Make them small enough to where it's easy for someone to drop in and review your pull in a couple minutes or less. If you build massive branches, it will take a massive amount of time for someone else to review your work, and that leads to a general slow-down with the pace of development.

Confused at how to make everything so small? This is where those feature flags from earlier come into play. When my team of three rebuilt GitHub Issues in 2014, we had shipped probably hundreds of tiny pull requests to production behind a feature flag that only we could see. We deployed a lot of partially-built components before they were "perfect". It made it a lot easier to review code as it was going out, and it made it quicker to build and see the new product in a real-world environment.

You want to deploy quickly and often. A team of ten could probably deploy at least 7-15 branches a day without too much fretting. Again, the smaller the diff, the more boring, straightforward, and stress-free your deploys become.

Branch deploys

When you're ready to deploy your new code, you should always deploy your branch before merging. Always.

View your entire repository as a record of fact. Whatever you have on your master branch (or whatever you've changed your default branch to be) should be noted as being the absolute reflection of what is on production. In other words, you can always be sure that your master branch is "good" and is a known state where the software isn't breaking.

Branches are the question. If you merge your branch first into master and then deploy master, you no longer have an easy way of determining what your good, known state is without doing an icky rollback in version control. It's not necessarily rocket science to do, but if you deploy something that breaks the site, the last thing you want to do is have to think about anything. You just want an easy out.

This is why it's important that your deploy tooling allows you to deploy your branch to production first. Once you're sure that your performance hasn't suffered, there's no stability issues, and your feature is working as intended, then you can merge it. The whole point of having this process is not for when things work, it's when things don't work. And when things don't work, the solution is boring, straightforward, and stress-free: you redeploy master. That's it. You're back to your known "good" state.

Auto-deploys

Part of all that is to have a stronger idea of what your "known state" is. The easiest way of doing that is to have a simple rule that's never broken:

Unless you're testing a branch, whatever is deployed to production is always reflected by the master branch.

The easiest way I've seen to handle this is to just always auto-deploy the master branch if it's changed. It's a pretty simple ruleset to remember, and it encourages people to make branches for all but the most risk-free commits.

There's a number of features in tooling that will help you do this. If you're on a platform like Heroku, they might have an option that lets you automatically deploy new versions on specific branches. CI providers like Travis CI will also allow auto-deploys on build success. And self-hosted tools like Heaven and hubot-deploy — tools we'll talk about in greater detail in the next section — will help you manage this as well.

Auto-deploys are also helpful when you do merge the branch you're working on into master. Your tooling should pick up a new revision and deploy the site again. Even though the content of the software isn't changing (you're effectively redeploying the same codebase), the SHA-1 does change, which makes it more explicit as to what the current known state of production is (which again, just reaffirms that the master branch is the known state).
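As a sketch of that rule, a minimal push-webhook handler that redeploys whenever master changes might look like the following; the payload fields follow GitHub's push event, but the bin/deploy script is hypothetical:

require "sinatra"
require "json"

post "/webhook" do
  payload = JSON.parse(request.body.read)

  # Only auto-deploy when the default branch itself changed.
  if payload["ref"] == "refs/heads/master"
    # Deploy the exact revision from the event, so the SHA-1 that defines
    # the current known state of production is always explicit.
    system("bin/deploy", "production", payload["after"])
  end

  status 204
end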

Blue-green deploys

Martin Fowler has pushed this idea of blue-green deployment since his 2010 article (which is definitely worth a read). In it, Fowler talks about the concept of using two identical production environments, which he calls "blue" and "green". Blue might be the "live" production environment, and green might be the idle production environment. You can then deploy to green, verify that everything is working as intended, and make a seamless cutover from blue to green. Production gains the new code without a lot of risk.

One of the challenges with automating deployment is the cut-over itself, taking software from the final stage of testing to live production.

This is a pretty powerful idea, and it's become even more powerful with the growing popularity of virtualization, containers, and generally having environments that can be easily thrown away and forgotten. Instead of having a simple blue/green deployment, you can spin up production environments for basically everything in the visual light spectrum.

There's a multitude of reasons behind doing this, from having disaster recovery available to having additional time to test critical features before users see them, but my favorite is the additional ability to play with new code.

Playing with new code ends up being pretty important in the product development cycle. Certainly a lot of problems should be caught earlier in code review or through automated testing, but if you're trying to do real product work, it's sometimes hard to predict how something will feel until you've tried it out for an extended period of time with real data. This is why blue-green deploys in production are more important than having a simple staging server whose data might be stale or completely fabricated.

What's more, if you have a specific environment that you've spun up with your code deployed to it, you can start bringing different stakeholders on board earlier in the process. Not everyone has the technical chops to pull your code down on their machine and spin your code up locally — and nor should they! If you can show your new live screen to someone in the billing department, for example, they can give you some realistic feedback on it prior to it going out live to the whole company. That can catch a ton of bugs and problems early on.

Heroku Pipelines

Whether or not you use Heroku, take a look at how they've been building out their concept of "Review Apps" in their ecosystem: apps get deployed straight from a pull request and can be immediately played with in the real world instead of just being viewed through screenshots or long-winded "this is what it might work like in the future" paragraphs. Get more people involved early before you have a chance to inconvenience them with bad product later on.

Control

Controlling the deployment process

Look, I'm totally the hippie liberal yuppie when it comes to organizational matters in a startup: I believe strongly in developer autonomy, a bottom-up approach to product development, and generally will side with the employee rather than management. I think it makes for happier employees and better product. But deployment is a pretty important, all-or-nothing process to get right. So I think adding some control around the deployment process makes a lot of sense.

Luckily, deployment tooling is an area where adding restrictions ends up freeing teammates up from stress, so if you do it right it's going to be a huge, huge benefit instead of what people might traditionally think of as a blocker. In other words, your process should facilitate work getting done, not get in the way of work.

Audit trails

I'm kind of surprised at how many startups I've seen unable to quickly bring up an audit log of deployments. There might be some sort of paper trail in a chat room transcript somewhere, but it's not something that is readily accessible when you need it.

The benefit of some type of audit trail for your deployments is basically what you'd expect: you'd be able to find out who deployed what to where and when. Every now and then you'll run into problems that don't manifest themselves until hours, days, or weeks after deployment, and being able to jump back and tie it to a specific code change can save you a lot of time.

A lot of services will generate these types of deployment listings fairly trivially for you. Amazon CodeDeploy and Dockbit, for example, have a lot of tooling around deploys in general, but they also serve as a nice trail of what happened when. GitHub's excellent Deployment API is a nice way to integrate with your external systems while still plugging deploy status directly into Pull Requests:

GitHub's deployment API

If you're playing on expert mode, plug your deployments and deployment times into one of the many, many time series databases and services like InfluxDB, Grafana, Librato, or Graphite. The ability to compare any given metric and layer deployment metrics on top of it is incredibly powerful: seeing a gradual increase of exceptions starting two hours ago might be curious at first, but not if you see an obvious deploy happen right at that time, too.
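As an example, here's a small sketch that records each deploy as a Graphite event so it can be overlaid on any metric graph; it assumes a Graphite install reachable at the URL below:

require "net/http"
require "json"
require "uri"

# Post a deploy event to Graphite; events render as vertical markers that
# you can layer on top of error rates, latency, or any other metric.
def record_deploy(app:, branch:, deployer:)
  uri = URI("http://graphite.example.com/events/")
  body = {
    what: "deploy",
    tags: "deploy #{app}",
    data: "#{deployer} deployed #{app}/#{branch}",
  }.to_json
  Net::HTTP.post(uri, body, "Content-Type" => "application/json")
end

record_deploy(app: "api", branch: "new-permissions", deployer: "holman")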

Deploy locking

Once you reach the point of having more than one person in a codebase, you're naturally going to have problems if multiple people try to deploy different code at once. While it's certainly possible to have multiple branches deployed to production at once — and it's advisable, as you grow past a certain point — you do need to have the tooling set up to deal with those deploys. Deploy locking is the first thing to take a look at.

Deploy locking is basically what you'd expect it to be: locking production so that only one person can deploy code at a time. There's many ways to do this, but the important part is that you make this visible.

The simplest way to achieve this visibility is through chat. A common pattern might be to set up deploy commands that simultaneously lock production like:

/deploy <app>/<branch> to <environment>

i.e.,

/deploy api/new-permissions to production

This makes it clear to everyone else in chat that you're deploying. I've seen a few companies hop in Slack and mention everyone in the Slack deploy room with @here I'm deploying […]!. I think that's unnecessary, and it only serves to distract your coworkers. By just tossing it in the room you'll be visible enough. If it's been a while since a deploy and it's not immediately obvious whether production is being used, you can add an additional chat command that returns the current state of production.

There's a number of pretty easy ways to plug this type of workflow into your chat. Dockbit has a Slack integration that adds deploy support to different rooms. There's also an open source option called SlashDeploy that integrates GitHub Deployments with Slack and gives you this workflow as well (as well as handling other aspects like locking).

Another possibility that I've seen is to build web tooling around all of this. Slack has a custom internal app that provides a visual interface to deployment. Pinterest has open sourced their web-based deployment system. You can take the idea of locking to many different forms; it just depends on what's most impactful for your team.

Once a deploy's branch has been merged to master, production should automatically unlock for the next person to use.

There's a certain amount of decorum required while locking production. Certainly you don't want people to wait to deploy while a careless programmer forgot they left production locked. Automatically unlocking on a merge to master is helpful, and you can also set up periodic reminders to mention the deployer if the environment had been locked for longer than 10 minutes, for instance. The idea is to shit and get off the pot as soon as possible.
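Under the hood, a chat-based lock can be as simple as a single Redis key with an expiry. A minimal sketch, with arbitrary key naming and TTL:

require "redis"

class DeployLock
  LOCK_TTL = 30 * 60 # seconds; expire locks a careless deployer forgot about

  def initialize(redis, environment)
    @redis = redis
    @key   = "deploy-lock:#{environment}"
  end

  # Returns true if we grabbed the lock; SET with NX fails if it's held.
  def acquire(user)
    @redis.set(@key, user, nx: true, ex: LOCK_TTL)
  end

  def holder
    @redis.get(@key)
  end

  # Call this when the deploy's branch gets merged into master.
  def release
    @redis.del(@key)
  end
end

lock = DeployLock.new(Redis.new, "production")
puts lock.acquire("holman") ? "production is yours" : "locked by #{lock.holder}"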

Deploy queueing

Once you have a lot of deployment locks happening and you have a lot of people on board deploying, you're obviously going to have some deploy contention. For that, draw from your deepest resolve of Britishness inside of you, and form a queue.

A deploy queue has a couple parts: 1) if there's a wait, add your name to the end of the list, and 2) allow for people to cut the line (sometimes Really Important Deploys Need To Happen Right This Minute and you need to allow for that).

The only problem with deploy queueing is having too many people queued to deploy. GitHub's been facing this internally the last year or so; come Monday when everybody wants to deploy their changes, the list of those looking to deploy can be an hour or more long. I'm not particularly a microservices advocate, but I think deploy queues specifically see a nice benefit if you're able to split things off from a majestic monolith.

Permissions

There's a number of methods to help restrict who can deploy and how someone can deploy.

2FA is one option. Hopefully your employee's chat account won't get popped, and hopefully they have other security measures enabled on their machine (full disk encryption, strong passwords, etc.). But for a little more peace of mind you can require a 2FA process to deploy.

You might get 2FA from your chat provider already. Campfire and Slack, for example, both support 2FA. If you want it to happen every time you deploy, however, you can build a challenge/response step into the deploy flow. Heroku and Basecamp both have a process like that internally, for instance.

Another possibility to handle the who side of permissions is to investigate what I tend to call "riding shotgun". I've seen a number of companies that have either informal or formal processes or tooling for ensuring that at least one senior developer is involved in every deploy. There's no reason you can't build a 2FA-style process like that into a chat client, for example, requiring both the deployer and the senior developer riding shotgun to confirm that the code can go out.

Monitor

Admire and check your work

Once you've got your code deployed, it's time to verify that what you just did actually did what you intended it to do.

Check the playbook

All deploys should really hit the exact same game plan each time, no matter if it's a frontend change or a backend change or anything else. You're going to want to check to see if the site is still up, if performance took a sudden turn for the worse, if error rates started climbing, or if there's an influx of new support issues. It's to your advantage to streamline that game plan.

If you have multiple sources of information for all of these aspects, try putting a link to each of these dashboards in your final deploy confirmation in chat, for example. That'll remind everyone every time to look and verify they're not impacting any metrics negatively.

Ideally, this should all be drawn from one source. Then it's easier to direct a new employee, for example, towards the important metrics to look at while making their first deploy. Pinterest's Teletraan, for example, has all of this in one interface.

Metrics

There's a number of metrics you can collect and compare that will help you determine whether you just made a successful deploy.

The most obvious, of course, is the general error rate. Has it dramatically shot up? If so, you probably should redeploy master and go ahead and fix those problems. You can automate a lot of this, and even automate the redeploy if the error rate crosses a certain threshold. Again, if you assume the master branch is always a known state you can roll back to, it makes it much easier to automate auto-rollbacks if you trigger a slew of exceptions right after deploy.
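A hedged sketch of automating that decision: the metrics and deployer objects below stand in for whatever error tracker and deploy tooling you actually run, and the 2x threshold is an arbitrary example.

ERROR_RATE_MULTIPLIER = 2.0 # "a slew of exceptions" = double the baseline

def maybe_roll_back(metrics, deployer, deployed_at)
  baseline = metrics.error_rate(before: deployed_at, minutes: 15)
  current  = metrics.error_rate(minutes: 5)

  return :ok if current <= baseline * ERROR_RATE_MULTIPLIER

  # master is the known-good state, so the rollback is just another deploy.
  deployer.deploy(branch: "master", environment: "production")
  :rolled_back
end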

The deployments themselves are interesting metrics to keep on hand as well. Zooming out over the last year or so can help give you a good idea of whether you're scaling the development pace up, or whether it's clear there are some problems and things are slowing down. You can also take it a step further and collect metrics on who's doing the deploying and, though I haven't heard of anyone doing this explicitly yet, tie error rates back to the deployer and develop a good measurement of who the reliable deployers are on the team.

Post-deploy cleanup

The final bit of housework that's required is the cleanup.

The slightly aggressively titled "Feature Toggles are one of the worst kinds of Technical Debt" talks a bit about this. If you're building things with feature flags and staff deployments, you run the risk of complicating the long-term sustainability of your codebase:

The plumbing and scaffolding logic to support branching in code becomes a nasty form of technical debt, from the moment each feature switch is introduced. Feature flags make the code more fragile and brittle, harder to test, harder to understand and maintain, harder to support, and less secure.

You don't need to do this right after a deploy; if you have a bigger feature or bugfix that needs to go out, you'll want to spend your time monitoring metrics instead of immediately deleting code. You should do it at some point after the deploy, though. If you have a large release, you can make it part of your shipping checklist to come back and remove code maybe a day or a week after it's gone out. One approach I liked to take was to prepare two pull requests: one that toggles the feature flag (i.e., ships the feature to everyone), and one that cleans up and removes all the excess code you introduced. When I'm sure that I haven't broken anything and it looks good, I can just merge the cleanup pull request later without a lot of thinking or development.

You should celebrate this internally, too: it's the final sign that your coworker has successfully finished what they were working on. And everyone likes it when a diff is almost entirely red. Removing code is fun.

Deleted branch

You can also delete the branch when you're done with it. There's nothing wrong with deleting branches when you're done with them. If you're using GitHub's pull requests, for example, you can always restore a deleted branch, so you'll benefit from having it cleared out of your branch list without actually losing any data. This step can be automated, too: periodically run a script that looks for stale branches that have been merged into master, and delete those branches.
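That script can be a handful of lines shelling out to git. A sketch, assuming it runs inside a checkout whose remote is named origin (a real version would probably also check the age of each branch's last commit):

# Delete remote branches that have already been merged into master.
merged = `git branch -r --merged origin/master`.lines.map(&:strip)

merged.each do |branch|
  next if branch.include?("->")          # skip the "origin/HEAD -> ..." line
  next unless branch.start_with?("origin/")

  name = branch.sub("origin/", "")
  next if name == "master"               # never delete the known-good state

  system("git", "push", "origin", "--delete", name)
end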

Neato

The whole ballgame

I only get emotional about two things: a moving photo of a Golden Retriever leaning with her best friend on top of a hill overlooking an ocean looking towards a beautiful sunset, and deployment workflows. The reason I care so much about this stuff is because I really do think it's a critical part of the whole ballgame. At the end of the day, I care about two things: how my coworkers are feeling, and how good the product I'm working on is. Everything else stems from those two aspects for me.

Deployments can cause stress and frustration, particularly if your company's pace of development is sluggish. They can also slow you down and prevent you from getting features and fixes out to your users.

I think it's worthwhile to think about this, and worthwhile to improve your own workflows. Spend some time and get your deploys to be as boring, straightforward, and stress-free as possible. It'll pay off.

Written by Zach Holman. Thanks for reading.

If you liked this, you might like some of the other things I've written. If you didn't like this, well, they're not all winners.

Did reading this leave you with questions, or do you have anything you'd like to talk about? Feel free to drop by my ask-me-anything repository on GitHub and file a new issue so we can chat about it in the open with other people in the community.

I hope we eventually domesticate sea otters.

News stories from Thursday 28 January, 2016

Favicon for Zach Holman 01:00 Startup Interviewing is Fucked » Post from Zach Holman Visit off-site link

Silicon Valley is full of startups who fetishize the candidate that comes into the interview, answers a few clever fantasy coding challenges, and ultimately ends up the award-winning hire that will surely implement the elusive algorithm that will herald a new era of profitability for the fledgling VC-backed company.

Most startups have zero users and are a glimmer of the successful business they might wind up being some day. But we’re still romanticizing the idea that programming riddles will magically be the best benchmark for hiring, even though technology is very rarely the cause for any given startup’s success.

Know what you need

There’s such a wild gulf between what gets asked in interviews and what gets done in the gig’s daily grind that it’s a wonder how startups make it out of the initial incubation phase in the first place.

I’m a product engineer. I don’t have a formal CS background, but I build things for the web, and I’m really good at it. Not once in the last ten months that I’ve on-and-off interviewed have I ever seen anything remotely close to a view or a controller or even a model. Not every company has insisted upon using programming riddles as a hiring technique, but the ones that do almost exclusively focus on weird algorithmic approaches to problems that don’t exist in the real world.

Interviewer: How would you write a method to do this operation?

Me: writes a one-liner in Ruby

Interviewer: Okay now what if you couldn’t use the standard library? Imagine it’s a 200GB file and you have to do it all in memory in Ruby.

Me: Why the fuck would I do that?

Certainly there are some jobs where being extremely performant and algorithmically “correct” are legitimate things to interview against. But look around: how many small, less-than-50-person startups are doing work like that? The dirty secret is that most startups for the first few years are glorified CRUD apps, and the well-rounded, diverse people who can have the biggest impact tend to be the ones who are comfortable wearing a lot of hats.

My favorite few tweets from this week talked about this:

Worry more about whether you’re self-selecting the wrong people into your organization.

Power dynamics

A huge problem with all this is that it creates a power dynamic that all but assures that people who are bad at technical interviews will fail.

Algorithm-based challenges typically come from a place where the interviewer, in all their self-aggrandizing smugness, comes up with something they think demonstrates cleverness. A reliable bet is to try solving it with recursion from the start; it’s goddamn catnip for interviewers. If that doesn’t work, try doing it all in one pass rather than several, because the extra 1ms you save in this use case will surely demonstrate your worth to the organization.

When you come at it from this perspective, you’re immediately telling your prospective coworker: “I have a secret that only I know right now, and I want you to arrive at this correct answer.” It becomes stressful because there is a correct answer.

Every single product I’ve built in my professional career has not had a correct answer. It’s more akin to carving a statue out of marble: you have a vague understanding of what you want to see, but you have to continually chip away at it and refine it until you end up with one possible result. You arrive at the answer, together, with your teammates. You don’t sit on a preconceived answer and direct your coworker to slug through it alone.

Collaborate

This is why I so strongly advocate for pair programming at some point in the interview process. Take an hour and knock off whatever bug or feature you were going to work on together. Not happening to be doing anything interesting today? The bug is too “boring”? Cool, then why are you working on it? If it’s representative of the real work that the candidate will face in the job, then it’s good enough to interview on. Besides, you can learn a lot from someone even in the simplest of fixes.

Build something real together. The very act of doing that entirely changes the power dynamic; I cannot stress that enough. Whereas previously you had someone struggling to find out a secret only you were initially privy to, you’re now working together on a problem neither of you have a firm answer to yet. Before it was adversarial; now it’s collaborative. It’ll put your candidate at ease, and they’ll be able to demonstrate their skillset to you much easier.

No one has any idea what they’re doing

I’ve heard — and experienced — so many things happening in tech interviews that are just bonkers.

You have stories from people like Max Howell, who got rejected from a job ostensibly because he’s not a good enough developer to whiteboard out algorithms, even though he built one of the most popular tools for software developers today.

I interviewed for a director of engineering role last year for a startup with famously massive growth that had fundamental problems with their hundreds of developers not being able to get any product shipped. I had a good discussion with their CEO and CTO about overhauling their entire process, CI, deployment, and management structure, and then when I went in for the final round of interviews for this non-programming leadership role the interviews were done almost entirely by junior developers who asked me beginner JavaScript questions. It just boggles my mind.


Look, I get it. It takes time and effort to interview someone, and most of you just want to get back to building stuff. Coming up with a standard question lets you get away with doing more with less effort, and gives you a modicum of an ability for comparison across different candidates.

But really take a long look at whether this selects the right candidates. The skill set needed for most early startups — particularly of early employees — is a glorious, twisted mess of product, code, marketing, design, communication, and empathy. Don’t filter out those people by doing what a Microsoft or an Apple does. They’re big companies, and let me be the first to tell you: that ain’t you right now. You have different priorities.

It’s more work, but it makes for better companies and better hires, in my opinion. But what do I know; I failed those fucking tests anyway.

News stories from Friday 08 January, 2016

Favicon for Zach Holman 01:00 Fuck Your 90 Day Exercise Window » Post from Zach Holman Visit off-site link

There are a lot of problems with the compensation we give early employees at startups. I don’t know how to fix all of them, but one obvious area to start directing our anger towards is something we can fix relatively quickly: the customary 90 day exercise window.

90 days and poof

Most startups give you a 90 day window to exercise your vested options once you leave the company — either through quitting or through termination — or all of your unexercised options vanish.

This creates a perverse incentive for employees not to grow the company too much.

For example: say you’re employee number one at A Very Cool Startup, and, through your cunning intellect and a lot of luck and a lot of help from your friends, you manage to help grow the company to the pixie fairy magic dragon unicorn stage: a billion dollar valuation. Cool! You’re totes gonna be mad rich.

I climbed the bridge lol

Ultimately, you end up leaving the company. Maybe the company’s outgrown you, or you’re bored after four years, or your spouse got a new job across the country, or you’ve been fired, or maybe you die, or hey, none of your business I just want out dammit. The company’s not public, though, so everything becomes trickier. With a 90 day exercise window, you now have three months to raise the money to pay to exercise your options and the additional tax burdens associated with exercising, otherwise you get nothing. In our imaginary scenario, that could be tens or hundreds of thousands of dollars. And remember: you’re a startup worker, so there’s a good chance you’ve been living off a smaller salary all along!
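To put rough numbers on that, here's a purely hypothetical back-of-the-envelope calculation; every figure is made up, and the 28% rate is only a crude approximation of how the AMT can treat the ISO exercise spread:

options = 100_000   # vested options
strike  = 0.50      # exercise price per share
fmv     = 5.00      # current fair market value per share (it's a unicorn now)

exercise_cost = options * strike          # => $50,000 in cash
paper_gain    = options * (fmv - strike)  # => $450,000 of taxable "spread"
amt_estimate  = paper_gain * 0.28         # => ~$126,000, very roughly

puts "Cash needed within 90 days: ~$#{(exercise_cost + amt_estimate).round}"
# prints: Cash needed within 90 days: ~$176000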

So you’re probably stuck. Either you fork out enough dough yourself on a monumentally risky investment, sell them on the secondary market (which most companies disallow post-Facebook IPO), give up a portion of equity in some shady half-sale-loan thing to various third parties, or forfeit the options entirely.

I mean, you did what you were supposed to: you helped grow that fucking company. And now, in part because of your success, it’s too expensive to own what you had worked hard to vest? Ridiculous.

Solutions

How we got here wasn’t necessarily malicious. These 90 day exercise windows can likely be tied back to ISOs terminating, by law, at 90 days. NSOs came along for the ride. This was less problematic when we had a somewhat more liquid marketplace for employee equity. With IPOs taking much longer to happen combined with companies restricting sale on the secondary market, these 90 days have completely stifled the tech worker’s ability to even hold the equity they’ve earned, much less profit from it.

There’s a relatively easy solution: convert vested ISOs to nonquals and extend the exercise window from 90 days to something longer. Pinterest is moving to seven years (in part by converting ISOs to nonquals). Sam Altman suggests ten years. In either case, those are both likely long enough timespans for other options to arise for you: the company could go public (in which case you can sell shares on the open market to handle the tax hit), the company could fail (in which case you’re not stuck getting fucked over paying hundreds of thousands of dollars for worthless stock, which can even happen in a “successful” acquisition), you could become independently wealthy some other way, or the company could get acquired and you gain even more outs.

Naturally, modifying the stock agreement is a solution that only companies can take. So what can you, the humble worker bee, do?

The new norm

We need to encourage companies to start taking steps towards correcting the problems we see today. I want to see more employees able to retain the compensation they earned. I want to see this become the norm.

My friend’s trying to adopt some employee-friendly terms in the incorporation of his third startup, and he mentioned this to me specifically:

You have no idea how hard it’s been to try something different. Even tried to get a three year vest for my employees, because I think four years is a bullshit norm, and lawyers mocked me for 15 minutes. Said it would make my company uninvestable.

The more companies we can get shifting to these employee-friendly terms, bit by bit, the easier it is for everyone else to accept these as the norm. Start the conversation with prospective employers. Write and tweet about your own experiences. Ask your leadership if they’ll switch over.

Clap for ‘em

One final, important part is to applaud the companies doing it right, and to promote them amongst the startup community.

I just created a repository at holman/extended-exercise-windows that lists out companies who have extended their exercise windows. If you’re interested in working for a company that takes a progressive, employee-friendly stance on this, give it a look. If you’re a company who’s switched to a longer exercise window, please contribute! And if you’re at a company that currently only does 90 day exercise windows, give them a friendly heads-up, and hopefully we can add them soon enough.

You have 90 days to do this, and then I’m deleting the repo.

Just kidding.

News stories from Tuesday 01 December, 2015

Favicon for Fabien Potencier 00:00 Announcing 24 Days of Blackfire » Post from Fabien Potencier Visit off-site link

I still remember the excitement I had 15 years ago when I discovered my first programming advent calendar; it was one about Perl. It was awesome, and every year, I was waiting for another series of blog posts about great Perl modules. When I open-sourced symfony1, I knew that writing an advent calendar would help adoption; Askeet was indeed a great success and the first advent calendar I was heavily involved with. I wrote another one, Jobeet, for symfony 1.4 some years later.

And today, I'm very happy to announce my third advent calendar, this one about Blackfire. This time, the goal is different though: in this series, I won't write an application; instead, I'm going to look at some development best practices, covering topics like profiling, performance, testing, continuous integration, and my vision of performance optimization best practices.

I won't reveal more about the content of the 24 days as the point is for you to discover a new chapter day after day, but I can already tell you that I have some great presents for you... just one small clue: it's about Open-Sourcing something. I'm going to stop this blog post now before I tell you too much!

Enjoy the first installment for now as it has just been published.

News stories from Wednesday 02 September, 2015

Favicon for Grumpy Gamer 08:00 Happy Birthday Monkey Island » Post from Grumpy Gamer Visit off-site link

I guess Monkey Island turns 25 this month. It’s hard to tell.

mi_title_ega.jpg

Unlike today, you didn’t push a button and unleash your game to billions of people. It was a slow process of sending “gold master” floppies off to manufacturing, which was often overseas, then waiting for them to be shipped to stores and the first of the teaming masses to buy the game.

Of course, when that happened, you rarely heard about it. There was no Internet for players to jump onto and talk about the game.

There was CompuServe and Prodigy, but those catered to a very small group of very highly technical people.

Lucasfilm’s process for finalizing and shipping a game consisted of madly testing for several months while we fixed bugs, then 2 weeks before we were to send off the gold masters, the game would go into “lockdown testing”.  If any bug was found, there was a discussion with the team and management about if it was worth fixing.  “Worth Fixing” consisted of a lot of factors, including how difficult it was to fix and if the fix would likely introduce more bugs.

Also keep in mind that when I made a new build, I didn’t just copy it to the network and let the testers at it; it had to be copied to four or five sets of floppy disks so it could be installed on each tester’s machine. It was a time-consuming and dangerous process. It was not uncommon for problems to creep up when I made the masters, and I’d have to start the whole process again. It could take several hours to make a new set of five testing disks.

It’s why we didn’t take getting bumped from test lightly.

During the 2nd week of “lockdown testing”, if a bug was found we had to bump the release date. We required that each game had one full week of testing on the build that was going to be released. Bugs found during this last week had to be crazy bad to fix.

When the release candidate passed testing, it would be sent off to manufacturing. Sometimes this was a crazy process. The builds destined for Europe were going to be duplicated in Europe, and we needed to get the gold master over there; if anything slipped, there wasn’t enough time to mail them. So we’d drive down to the airport, find a flight headed to London, go to the gate, and ask a passenger if they would mind carrying the floppy disks for us, and someone would meet them at the gate.

Can you imagine doing that these days? You can’t even get to the gate, let alone find a person that would take a strange package on a flight for you. Different world.

floppies.jpg

After the gold masters were made, I’d archive all the source code. There was no version control back then, or even network storage, so archiving the source meant copying it to a set of floppy disks.

I made these disks on Sept 2nd, 1990, so the gold masters were sent off within a few days of that. They have a 1.1 version due to Monkey Island being bumped from testing; I don’t remember if it was in the 1st or 2nd week of “lockdown”.

It’s hard to know when it first appeared in stores. It could have been late September or even October, and it happened without fanfare. The gold masters were made on the 2nd, so that’s what I’m calling The Secret of Monkey Island’s birthday.

MI1_island_small.jpg

Twenty Five years. That’s a long time.

It amazes me that people still play and love Monkey Island. I never would have believed it back then.

It’s hard for me to understand what Monkey Island means to people. I am always asked why I think it’s been such an enduring and important game. My answer is always “I have no idea.”

I really don’t.

I was very fortunate to have an incredible team. From Dave and Tim to Steve Purcell, Mark Ferrari, an amazing testing department and everyone else who touched the game's creation. And also a company management structure that knew to leave creative people alone and let them build great things.

award.jpg

Monkey Island was never a big hit. It sold well, but not nearly as well as anything Sierra released. I started working on Monkey Island II about a month after Monkey Island I went to manufacturing, with no idea if the first game was going to do well or completely bomb. I think that was part of my strategy: start working on it before anyone could say “it’s not worth it, let’s go make Star Wars games”.

There are two things in my career that I’m most proud of. Monkey Island is one of them and Humongous Entertainment is the other. They have both touched and influenced a lot of people. People will tell me that they learned English or how to read by playing Monkey Island. People have had Monkey Island weddings. Two people have asked me if it was OK to name their new child Guybrush. One person told me that he and his father fought and never got along, except for when they played Monkey Island together.

It makes me extremely proud and is very humbling.

I don’t know if I will ever get to make another Monkey Island. I always envisioned the game as a trilogy and I really hope I do, but I don’t know if it will ever happen. Monkey Island is now owned by Disney and they haven't shown any desire to sell me the IP. I don’t know if I could make Monkey Island 3a without complete control over what I was making and the only way to do that is to own it. Disney: Call me.

Maybe someday. Please don’t suggest I do a Kickstarter to get the money, that’s not possible without Disney first agreeing to sell it and they haven’t done that.

Anyway…

Happy Birthday to Monkey Island and a huge thanks to everyone who helped make it great and to everyone who kept it alive for Twenty Five years.

fan_letter.jpg fan_pic1b.jpg

fan_letter2c.jpg

I thought I'd celebrate the occasion by making another point & click adventure, with verbs.

News stories from Saturday 04 July, 2015

Favicon for Fabien Potencier 23:00 "Create your Own Framework" Series Update » Post from Fabien Potencier Visit off-site link

Three years ago, I published a series of articles about how to create a framework on top of the Symfony components on this blog.

Along the years, its contents have been updated to match the changes in Symfony itself but also in the PHP ecosystem (like the introduction of Composer). But those changes were made on a public Github repository, not on this blog.

As this series has proved to be popular, I've decided a few months ago to move it to the Symfony documentation itself where it would be more exposed and maintained by the great Symfony doc team. It was a long process, but it's done now.

Enjoy the new version in a dedicated documentation section, "Create your PHP Framework", on symfony.com.

News stories from Wednesday 24 June, 2015

Favicon for the web hates me 09:00 Projektwerkstatt: SecurityGraph » Post from the web hates me Visit off-site link

I work for a large publishing house, and we easily have 500 software components in use. Most of it is probably PHP. Lots of Symfony, Symfony2, Drupal, WordPress. You know the usual suspects. Listing the main frameworks is easy for any of us; the stupid thing is, we don't really know what else we have running on the side. […]

The post Projektwerkstatt: SecurityGraph appeared first on the web hates me.

News stories from Tuesday 23 June, 2015

Favicon for the web hates me 08:00 Projektwerkstatt: getYourFoundation.io » Post from the web hates me Visit off-site link

Day two of our little creativity series. Yesterday was about a deeper integration of Twitter into WordPress, and today things get a bit more technical again. But first, from the top. Recently I once again had the luck to do a bit of programming. Since I became a team lead, I unfortunately don't get around to it as often, which […]

The post Projektwerkstatt: getYourFoundation.io appeared first on the web hates me.

News stories from Monday 22 June, 2015

Favicon for the web hates me 13:00 Projektwerkstatt – twitter@wp » Post from the web hates me Visit off-site link

So let's start with the first part of the Projektwerkstatt week. The idea is a little older now, but in my opinion still a good one. As you know, our blog is also on Twitter. We can proudly count a full 1431 followers. On top of that, we run on WordPress, even though the technology behind it […]

The post Projektwerkstatt – twitter@wp appeared first on the web hates me.

Favicon for the web hates me 08:45 Woche der Projektideen » Post from the web hates me Visit off-site link

We're kicking off with a short post, or rather an announcement. Last week I finally had time again to write down some of my business ideas, and since, as so often, I can't implement them all myself, I'm presenting them to you here; maybe a team will turn up that's keen to take one on. You will […]

The post Woche der Projektideen appeared first on the web hates me.

News stories from Friday 19 June, 2015

Favicon for nikic's Blog 01:00 Internal value representation in PHP 7 - Part 2 » Post from nikic's Blog Visit off-site link

In the first part of this article, high level changes in the internal value representation between PHP 5 and PHP 7 were discussed. As a reminder, the main difference was that zvals are no longer individually allocated and don’t store a reference count themselves. Simple values like integers or floats can be stored directly in a zval, while complex values are represented using a pointer to a separate structure.

The additional structures for complex zval values all use a common header, which is defined by zend_refcounted:

struct _zend_refcounted {
    uint32_t refcount;
    union {
        struct {
            ZEND_ENDIAN_LOHI_3(
                zend_uchar    type,
                zend_uchar    flags,
                uint16_t      gc_info)
        } v;
        uint32_t type_info;
    } u;
};

This header now holds the refcount, the type of the value and cycle collection info (gc_info), as well as a slot for type-specific flags.

In the following, the details of the individual complex types will be discussed and compared to the previous implementation in PHP 5. One of these complex types is references, which were already covered in the previous part. Another type that will not be covered here is resources, because I don’t consider them to be interesting.

Strings

PHP 7 represents strings using the zend_string type, which is defined as follows:

struct _zend_string {
    zend_refcounted   gc;
    zend_ulong        h;        /* hash value */
    size_t            len;
    char              val[1];
};

Apart from the refcounted header, a string contains a hash cache h, a length len and a value val. The hash cache is used to avoid recomputing the hash of the string every time it is used to look up a key in a hashtable. On first use it will be initialized to the (non-zero) hash.
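
The lazy computation of the hash cache can be sketched as follows. This is an illustrative helper, not the engine's exact code; zend_hash_func is assumed to be the engine's string hashing routine:

static zend_ulong zstr_hash(zend_string *s) {
    if (s->h == 0) {                           /* hash not computed yet */
        s->h = zend_hash_func(s->val, s->len); /* assumed hashing helper */
    }
    return s->h;
}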

If you’re not familiar with the quite extensive lore of dirty C hacks, the definition of val may look strange: It is declared as a char array with a single element - but surely we want to store strings longer than one character? This uses a technique called the “struct hack”: The array is declared with only one element, but when creating the zend_string we’ll allocate it to hold a larger string. We’ll still be able to access the larger string through the val member.

Of course this is technically undefined behavior, because we end up reading and writing past the end of a single-character array, however C compilers know not to mess with your code when you do this. C99 explicitly supports this in the form of “flexible array members”, however thanks to our dear friends at Microsoft, nobody needing cross-platform compatibility can actually use C99.
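
To make the struct hack concrete, here is a minimal sketch of how such a string might be allocated, assuming the usual C headers. The real zend_string allocation also initializes the refcounted header and flags, which are omitted here:

zend_string *zstr_new(const char *src, size_t len) {
    /* sizeof(zend_string) already includes val[1], which covers the NUL byte */
    zend_string *str = malloc(sizeof(zend_string) + len);
    str->h = 0;                /* hash is computed lazily on first use */
    str->len = len;
    memcpy(str->val, src, len);
    str->val[len] = '\0';      /* writes into the over-allocated tail */
    return str;
}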

The new string type has some advantages over using normal C strings: Firstly, it directly embeds the string length. This means that the length of a string no longer needs to be passed around all over the place. Secondly, as the string now has a refcounted header, it is possible to share a string in multiple places without using zvals. This is particularly important for sharing hashtable keys.

The new string type also has one large disadvantage: While it is easy to get a C string from a zend_string (just use str->val) it is not possible to directly get a zend_string from a C string – you need to actually copy the string’s value into a newly allocated zend_string. This is particularly inconvenient when dealing with literal strings (constant strings occurring in the C source code).

There are a number of flags a string can have (which are stored in the GC flags field):

#define IS_STR_PERSISTENT           (1<<0) /* allocated using malloc */
#define IS_STR_INTERNED             (1<<1) /* interned string */
#define IS_STR_PERMANENT            (1<<2) /* interned string surviving request boundary */

Persistent strings use the normal system allocator instead of the Zend memory manager (ZMM) and as such can live longer than one request. Specifying the used allocator as a flag allows us to transparently use persistent strings in zvals, while previously in PHP 5 a copy into the ZMM was required beforehand.

Interned strings are strings that won’t be destroyed until the end of a request and as such don’t need to use refcounting. They are also deduplicated, so if a new interned string is created the engine first checks if an interned string with the given content already exists. All strings that occur literally in PHP source code (this includes string literals, variable and function names, etc) are usually interned. Permanent strings are interned strings that were created before a request starts. While normal interned strings are destroyed on request shutdowns, permanent strings are kept alive.
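
The deduplication step might look like this in sketch form; the interned_table_find and interned_table_insert helpers are hypothetical stand-ins for the engine's internal interning table:

zend_string *intern_string(zend_string *s) {
    zend_string *found = interned_table_find(s->val, s->len); /* hypothetical */
    if (found != NULL) {
        /* an equal interned string already exists: reuse it */
        return found;
    }
    s->gc.u.v.flags |= IS_STR_INTERNED; /* no refcounting from here on */
    interned_table_insert(s);           /* hypothetical */
    return s;
}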

If opcache is used interned strings will be stored in shared memory (SHM) and as such shared across all PHP worker processes. In this case the notion of permanent strings becomes irrelevant, because interned strings will never be destroyed.

Arrays

I will not talk about the details of the new array implementation here, as this is already covered in a previous article. It’s no longer accurate in some details due to recent changes, but all the concepts are still the same.

There is only one new array-related concept I’ll mention here, because it is not covered in the hashtable post: Immutable arrays. These are essentially the array equivalent of interned strings, in that they don’t use refcounting and always live until the end of the request (or longer).

Due to some memory management concerns, immutable arrays are only used if opcache is enabled. To see what kind of difference this can make, consider the following script:

for ($i = 0; $i < 1000000; ++$i) {
    $array[] = ['foo'];
}
var_dump(memory_get_usage());

With opcache the memory usage is 32 MiB, but without opcache usage rises to a whopping 390 MiB, because each element of $array will get a new copy of ['foo'] in this case. The reason an actual copy is done here (instead of a refcount increase) is that literal VM operands don’t use refcounting to avoid SHM corruption. I hope we can improve this currently catastrophic case to work better without opcache in the future.

Objects in PHP 5

Before considering the object implementation in PHP 7, let’s first walk through how things worked in PHP 5 and highlight some of the inefficiencies: The zval itself used to store a zend_object_value, which is defined as follows:

typedef struct _zend_object_value {
    zend_object_handle handle;
    const zend_object_handlers *handlers;
} zend_object_value;

The handle is a unique ID of the object which can be used to look up the object data. The handlers are a VTable of function pointers implementing various behaviors of an object. For “normal” PHP objects this handler table will always be the same, but objects created by PHP extensions can use a custom set of handlers that modifies the way the object behaves (e.g. by overloading operators).

The object handle is used as an index into the “object store”, which is an array of object store buckets defined as follows:

typedef struct _zend_object_store_bucket {
    zend_bool destructor_called;
    zend_bool valid;
    zend_uchar apply_count;
    union _store_bucket {
        struct _store_object {
            void *object;
            zend_objects_store_dtor_t dtor;
            zend_objects_free_object_storage_t free_storage;
            zend_objects_store_clone_t clone;
            const zend_object_handlers *handlers;
            zend_uint refcount;
            gc_root_buffer *buffered;
        } obj;
        struct {
            int next;
        } free_list;
    } bucket;
} zend_object_store_bucket;

There are quite a lot of things going on here. The first three members are just some metadata (whether the destructor of the object was called, whether this bucket is used at all, and how many times this object was visited by some recursive algorithm). The following union distinguishes whether the bucket is currently used or whether it is part of the bucket free list. Important for us is the case where struct _store_object is used:

The first member object is a pointer to the actual object (finally). It is not directly embedded in the object store bucket, because objects have no fixed size. The object pointer is followed by three handlers managing destruction, freeing and cloning. Note that in PHP destruction and freeing of objects are distinct steps, with the former being skipped in some cases (“unclean shutdown”). The clone handler is virtually never used. Because these storage handlers are not part of the normal object handlers (for whatever reason) they will be duplicated for every single object, rather than being shared.

These object store handlers are followed by a pointer to the ordinary object handlers. These are stored in case the object is destroyed without a zval being known (which usually stores the handlers).

The bucket also contains a refcount, which is somewhat odd given how in PHP 5 the zval already stores a reference count. Why do we need another? The problem is that while usually zvals are “copied” simply by increasing their refcount, there are also cases where a hard copy occurs, i.e. an entirely new zval is allocated with the same zend_object_value. In this case two distinct zvals end up using the same object store bucket, so it needs to be refcounted as well. This kind of “double refcounting” is one of the inherent issues of the PHP 5 zval implementation. The buffered pointer into the GC root buffer is also duplicated for similar reasons.

Now let’s look at the actual object that the object store points to. For normal userland objects it is defined as follows:

typedef struct _zend_object {
    zend_class_entry *ce;
    HashTable *properties;
    zval **properties_table;
    HashTable *guards;
} zend_object;

The zend_class_entry is a pointer to the class this object is an instance of. The two following members are used for two different ways of storing object properties. For dynamic properties (i.e. ones that are added at runtime and not declared in the class) the properties hashtable is used, which just maps (mangled) property names to their values.

However for declared properties an optimization is used: During compilation every such property is assigned an index and the value of the property is stored at that index in the properties_table. The mapping between property names and their index is stored in a hashtable in the class entry. As such the memory overhead of the hashtable is avoided for individual objects. Furthermore the index of a property is cached polymorphically at runtime.

The guards hashtable is used to implement the recursion behavior of magic methods like __get, which I won’t go into here.

Apart from the double refcounting issue already previously mentioned, the object representation is also heavy on memory usage with 136 bytes for a minimal object with a single property (not counting zvals). Furthermore there is a lot of indirection involved: For example, to fetch a property on an object zval, you first have to fetch the object store bucket, then the zend object, then the properties table and then the zval it points to. As such there are already four levels of indirection at a minimum (and in practice it will be no fewer than seven).

Objects in PHP 7

PHP 7 tries to improve on all of these issues by getting rid of double refcounting, dropping some of the memory bloat and reducing indirection. Here’s the new zend_object structure:

struct _zend_object {
    zend_refcounted   gc;
    uint32_t          handle;
    zend_class_entry *ce;
    const zend_object_handlers *handlers;
    HashTable        *properties;
    zval              properties_table[1];
};

Note that this structure is now (nearly) all that is left of an object: The zend_object_value has been replaced with a direct pointer to the object and the object store, while not entirely gone, is much less significant.

Apart from now including the customary zend_refcounted header, you can see that the handle and the handlers of the object value have been moved into the zend_object. Furthermore the properties_table now also makes use of the struct hack, so the zend_object and the properties table will be allocated in one chunk. And of course, the property table now directly embeds zvals, instead of containing pointers to them.
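
A sketch of the one-chunk allocation, where nprops stands for the number of declared properties known from the class entry (illustrative; the real allocation also initializes the header and default property values):

/* One allocation covers the object header and all declared properties.
 * The -1 accounts for the zval already provided by properties_table[1]. */
zend_object *obj = emalloc(sizeof(zend_object) + sizeof(zval) * (nprops - 1));

/* Reading a declared property is then a single indirection
 * (prop_index is the compile-time index of the property): */
zval *prop = &obj->properties_table[prop_index];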

The guards table is no longer directly present in the object structure. Instead it will be stored in the first properties_table slot if it is needed, i.e. if the object uses __get etc. But if these magic methods are not used, the guards table is elided.

The dtor, free_storage and clone handlers that were previously stored in the object store bucket have now been moved into the handlers table, which starts as follows:

struct _zend_object_handlers {
    /* offset of real object header (usually zero) */
    int                                     offset;
    /* general object functions */
    zend_object_free_obj_t                  free_obj;
    zend_object_dtor_obj_t                  dtor_obj;
    zend_object_clone_obj_t                 clone_obj;
    /* individual object functions */
    // ... rest is about the same in PHP 5
};

At the top of the handler table is an offset member, which is quite clearly not a handler. This offset has to do with how internal objects are represented: An internal object always embeds the standard zend_object, but typically also adds a number of additional members. In PHP 5 this was done by adding them after the standard object:

struct custom_object {
    zend_object std;
    uint32_t something;
    // ...
};

This means that if you get a zend_object* you can simply cast it to your custom struct custom_object*. This is the standard means of implementing structure inheritance in C. However in PHP 7 there is an issue with this particular approach: Because zend_object uses the struct hack for storing the properties table, PHP will be storing properties past the end of zend_object and thus overwriting additional internal members. This is why in PHP 7 additional members are stored before the standard object instead:

struct custom_object {
    uint32_t something;
    // ...
    zend_object std;
};

However this means that it is no longer possible to directly convert between a zend_object* and a struct custom_object* with a simple cast, because the two are separated by an offset. This offset is what’s stored in the first member of the object handler table. At compile-time the offset can be determined using the offsetof() macro.
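
In code, the two directions of this conversion look roughly like this (a sketch; the fetch helper name is illustrative, though real extensions define something very similar):

#include <stddef.h> /* offsetof */

static zend_object_handlers custom_object_handlers;

static struct custom_object *custom_object_fetch(zend_object *obj) {
    /* step back from the embedded zend_object to the enclosing struct */
    return (struct custom_object *)((char *)obj - offsetof(struct custom_object, std));
}

static void init_handlers(void) {
    custom_object_handlers.offset = offsetof(struct custom_object, std);
}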

You may wonder why PHP 7 objects still contain a handle. After all, we now directly store a pointer to the zend_object, so we no longer need the handle to look up the object in the object store.

However the handle is still necessary, because the object store still exists, albeit in a significantly reduced form. It is now a simple array of pointers to objects. When an object is created a pointer to it is inserted into the object store at the handle index and removed once the object is freed.

Why do we still need the object store? The reason behind this is that during request shutdown, there comes a point where it is no longer safe to run userland code, because the executor is already partially shut down. To avoid this PHP will run all object destructors at an early point during shutdown and prevent them from running at a later point in time. For this a list of all active objects is needed.

Furthermore the handle is useful for debugging, because it gives each object a unique ID, so it’s easy to see whether two objects are really the same or just have the same content. HHVM still stores an object handle despite not having a concept of an object store.

Comparing with the PHP 5 implementation, we now have only one refcount (as the zval itself no longer has one) and the memory usage is much smaller: We need 40 bytes for the base object and 16 bytes for every declared property, already including its zval. The amount of indirection is also significantly reduced, as many of the intermediate structures were either dropped or embedded. As such reading a property is now only a single level of indirection, rather than four.

Indirect zvals

At this point we have covered all of the normal zval types, however there are a couple of additional special types that are used only in certain circumstances. One that was newly added in PHP 7 is IS_INDIRECT.

An indirect zval signifies that its value is stored in some other location. Note that this is different from the IS_REFERENCE type in that it directly points to another zval, rather than a zend_reference structure that embeds a zval.

To understand under what circumstances this may be necessary, consider how PHP implements variables (though the same also applies to object property storage):

All variables that are known at compile-time are assigned an index and their value will be stored at that index in the compiled variables (CV) table. However PHP also allows you to dynamically reference variables, either by using variable variables or, if you are in global scope, through $GLOBALS. If such an access occurs, PHP will create a symbol table for the function/script, which contains a map from variable names to their values.

This leads to the question: How can both forms of access be supported at the same time? We need table-based CV access for normal variable fetches and symtable-based access for varvars. In PHP 5 the CV table used doubly-indirected zval** pointers. Normally those pointers would point to a second table of zval* pointers that would point to the actual zvals:

+------ CV_ptr_ptr[0]
| +---- CV_ptr_ptr[1]
| | +-- CV_ptr_ptr[2]
| | |
| | +-> CV_ptr[0] --> some zval
| +---> CV_ptr[1] --> some zval
+-----> CV_ptr[2] --> some zval

Now, once a symbol table came into use, the second table with the single zval* pointers was left unused and the zval** pointers were updated to point into the hashtable buckets instead. Here illustrated assuming the three variables are called $a, $b and $c:

CV_ptr_ptr[0] --> SymbolTable["a"].pDataPtr --> some zval
CV_ptr_ptr[1] --> SymbolTable["b"].pDataPtr --> some zval
CV_ptr_ptr[2] --> SymbolTable["c"].pDataPtr --> some zval

In PHP 7 using the same approach is no longer possible, because a pointer into a hashtable bucket will be invalidated when the hashtable is resized. Instead PHP 7 uses the reverse strategy: For the variables that are stored in the CV table, the symbol hashtable will contain an INDIRECT entry pointing to the CV entry. The CV table will not be reallocated for the lifetime of the symbol table, so there is no problem with invalidated pointers.

So if you have a function with CVs $a, $b and $c, as well as a dynamically created variable $d, the symbol table could look something like this:

SymbolTable["a"].value = INDIRECT --> CV[0] = LONG 42
SymbolTable["b"].value = INDIRECT --> CV[1] = DOUBLE 42.0
SymbolTable["c"].value = INDIRECT --> CV[2] = STRING --> zend_string("42")
SymbolTable["d"].value = ARRAY --> zend_array([4, 2])

Indirect zvals can also point to an IS_UNDEF zval, in which case it is treated as if the hashtable does not contain the associated key. So if unset($a) writes an UNDEF type into CV[0], then this will be treated like the symbol table no longer having a key "a".
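
Put together, a symbol-table lookup has to follow the indirection and treat UNDEF as absent. A sketch (Z_TYPE_P and Z_INDIRECT_P are the engine's accessor macros; the overall flow here is simplified):

zval *zv = zend_hash_find(symbol_table, name); /* name is a zend_string* */
if (zv != NULL && Z_TYPE_P(zv) == IS_INDIRECT) {
    zv = Z_INDIRECT_P(zv);     /* follow the pointer to the CV slot */
}
if (zv == NULL || Z_TYPE_P(zv) == IS_UNDEF) {
    /* the variable does not exist (or has been unset) */
}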

Constants and ASTs

There are two more special types IS_CONSTANT and IS_CONSTANT_AST which exist both in PHP 5 and PHP 7 and deserve a mention here. To understand what these do, consider the following example:

function test($a = ANSWER,
              $b = ANSWER * ANSWER) {
    return $a + $b;
}

define('ANSWER', 42);
var_dump(test()); // int(42 + 42 * 42)

The default values for the parameters of the test() function make use of the constant ANSWER - however this constant is not yet defined when the function is declared. The constant will only be available once the define() call has run.

For this reason parameter and property default values, constants and everything else accepting a “static expression” have the ability to postpone evaluation of the expression until first use.

If the value is a constant (or class constant), which is the most common case for late-evaluation, this is signaled using an IS_CONSTANT zval with the constant name. If the value is an expression, an IS_CONSTANT_AST zval pointing to an abstract syntax tree (AST) is used.

And this concludes our walk through the PHP 7 value representation. Two more topics I’d like to write about at some point are some of the optimizations done in the virtual machine, in particular the new calling convention, as well as the improvements that were made to the compiler infrastructure.

News stories from Tuesday 09 June, 2015

Favicon for Ramblings of a web guy 00:45 Apple Says My Screen Is Third Party » Post from Ramblings of a web guy Visit off-site link
I have always had the utmost respect for Apple. Even before I used Macs and before the iPhone came out, I knew they were a top notch company.

I have had five iPhones. I have had 6 or 7 MacBook Pros. My kids have Macs. My kids have iPhones. My parents use iPads. I think a lot of Apple products and service... until today.

We took my daughter's hand me down iPhone 5 in to have the ear piece and top button fixed. It's been in the family the whole time. It was never owned by anyone other than family. Last year, I took it in for the Apple Store Battery Replacement Program. That is the last time anyone had it open. In fact, that may have been the last time it was out of its case. More on this later.

After we dropped off the phone today, we were told it was going to be an hour. No problem, we could kill some time. We came back an hour later and the person brought us the phone out and tells us that they refused to work on it because the screen is a 3rd party part. Whoa! What? I tell her that the only place it was ever worked on was in that exact store. She goes to get a manager. I thought, OK, the Apple customer service I know and love is about to kick in. They are going to realize their mistake and this will all be good. Or, even if they still think it's a 3rd party screen, he will come up with some resolution for the problem. Um, no.

He says the same thing (almost verbatim) to me that the previous person said. I again tell him it has only been opened by them. He offers to take it to the back and have a technician open it up again. He was not really gone long enough for that. He comes back, points at some things on the screen and tells me that is how they know it's a 3rd party part. I again, tell him that only the Apple Store has had it open. His response is a carefully crafted piece of technicality that can only come from lawyers and businessmen. It was along the lines of "At some point, this screen has been replaced with a 3rd party screen. I am not saying you are lying. I am not claiming to know how it was replaced. I am only stating that this is a 3rd party screen." What?

So, OK, what now? I mean, it wasn't under warranty. I did not expect to get a new free phone. I was going to pay to have it fixed. Nope. They won't touch it with a ten foot pole. It has a 3rd party part on it. He claims, that because they base their repair fees on being able to refurbish and reuse the parts they pull off of the phone (the phone I own and paid for by the way), they can't offer to repair a phone with parts they can't refurbish. I can't even pay full price, whatever that is. He never gave me a price to pay for a new screen with no discounts.

At this point, I realized I needed to leave. I was so furious. I was furious it was happening. I was furious that the manager had no solution for me. I was furious that he was speaking in legalese.

Just to be clear, I could buy my daughter a new iPhone 6. I am not trying to get something for nothing. I just wanted the phone to work again. One of the things I love about Apple products is how well they hold up. Sure, you have to have some work done on them sometimes. Batteries go bad. Buttons quit working. But, let's be real. My daughter uses this thing for hours a day. I have the data bill to prove it. So, I like that I can have an Apple product repaired when it breaks and it gets a longer life. The alternative is to throw it away.

How did I end up here? I can only come up with one scenario. And the thought that this is what happened upsets me even more. When we took it for the battery replacement last year, they kept it longer than their initial estimate. And the store was dead that day. When they brought it out, the case would not fit on the bottom of the phone. It was like the screen was not on all the way. The person took it back to the back again. They came out later and it seemed to work fine. And I was fine with all of this because it's Apple. I trust(ed) Apple. But, what if they broke the screen? What if the tech who broke it used a screen from some returned phone that had a third-party part and no one caught it? Or what if Apple was knowingly using third-party parts?

If I had not just had the battery replaced last year, I would think maybe there was some shenanigans in the shipping when the phone was new. We bought this phone brand new when the iPhone 5 came out. It would not come as a surprise if some devices had been intercepted and taken apart along the shipping lines. Or even in production. But, we just had it serviced at the Apple Store last year. They had no problem with the screen then other than the one they caused when they had to put it back together a second time.

This all sounds too far-fetched, right? Sadly, there seems to be a trend of Apple denying service to people. All of these people can't be lying. They can't all be out to get one over on Apple.



While waiting for our appointment, I overheard an Apple Genius telling a woman she "may" have had water damage. She didn't tell her she did. She did not claim the woman was lying. She thought she "may" have water damage. I don't know if she did or not. What struck me was the way she told her she "thought it could be" water damage. She told her she had seen lots of bad screens, but none of them (really? not one single screen?) had vertical lines in it like this. It's like she was setting her up to come back later and say "Darn, the tech says it is water damage." Sadly, I find myself doubting that conversation now. It makes me want to take a phone in with horizontal lines and see if I get the same story.

Of course, I know what many, many people will say to this. You will say that if I am really this upset, I should not buy any more Apple products. And you are right. That is the American way. The free market is the way to get to companies. The thing is, if I bought a Samsung Galaxy, where would I get it fixed? Would my experience be any better? There is no Samsung store. There are no Authorized Samsung repair facilities. So, what would that get me? A disposable phone? Maybe that is what Apple wants. Maybe that is their goal. Deny service to people in hopes it will lead to more sales and less long term use of their devices.

And you know what makes this all even more crappy? One of the reasons he says he knows it is a third-party screen is that the home button is loose. It wasn't loose when we brought it in! I was using the phone myself to make sure a backup was done just before we handed it over to the Apple Store. They did that when they opened the screen and decided it was a third-party part. So now, my daughter's phone not only has no working ear piece and a top button that works only some of the time. Now her home button spins around. Sigh.

News stories from Monday 18 May, 2015

Favicon for ircmaxell's blog 15:30 Prefix Trees and Parsers » Post from ircmaxell's blog Visit off-site link
In my last post, Tries and Lexers, I talked about an experiment I was doing related to parsing of JavaScript code. By the end of the post I had shifted to wanting to build a HTTP router using the techniques that I learned. Let's continue where we left off...

Read more »

News stories from Friday 15 May, 2015

Favicon for ircmaxell's blog 17:00 Tries and Lexers » Post from ircmaxell's blog Visit off-site link
Lately I have been playing around with a few experimental projects. The current one started when I tried to make a templating engine. Not just an ordinary one, but one that understood the context of a variable so it could encode/escape it properly. Imagine being able to put a variable in a JavaScript string in your template, and have the engine transparently encode it correctly for you. Awesome, right? Well, while doing it, I went down a rabbit hole. And it led to something far more awesome.

Read more »

News stories from Tuesday 05 May, 2015

Favicon for nikic's Blog 01:00 Internal value representation in PHP 7 - Part 1 » Post from nikic's Blog Visit off-site link

My last article described the improvements to the hashtable implementation that were introduced in PHP 7. This followup will take a look at the new representation of PHP values in general.

Due to the amount of material to cover, the article is split in two parts: This part will describe how the zval (Zend value) implementation differs between PHP 5 and PHP 7, and also discuss the implementation of references. The second part will investigate the realization of individual types like strings or objects in more detail.

Zvals in PHP 5

In PHP 5 the zval struct is defined as follows:

typedef struct _zval_struct {
    zvalue_value value;
    zend_uint refcount__gc;
    zend_uchar type;
    zend_uchar is_ref__gc;
} zval;

As you can see, a zval consists of a value, a type and some additional __gc information, which we’ll talk about in a moment. The value member is a union of different possible values that a zval can store:

typedef union _zvalue_value {
    long lval;                 // For booleans, integers and resources
    double dval;               // For floating point numbers
    struct {                   // For strings
        char *val;
        int len;
    } str;
    HashTable *ht;             // For arrays
    zend_object_value obj;     // For objects
    zend_ast *ast;             // For constant expressions
} zvalue_value;

A C union is a structure in which only one member can be active at a time and whose size matches the size of its largest member. All members of the union will be stored in the same place in memory and will be interpreted differently depending on which one you access. If you read the lval member of the above union, its value will be interpreted as a signed integer. If you read the dval member the value will be interpreted as a double-precision floating point number instead. And so on.
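
The following standalone toy program illustrates this aliasing behavior with a simplified two-member union:

#include <stdio.h>

union toy_value {
    long   lval;
    double dval;
};

int main(void) {
    union toy_value v;
    v.lval = 42;
    /* Reading the member that was written returns 42; reading v.dval here
     * would reinterpret the same bytes as a double and yield nonsense. */
    printf("%ld\n", v.lval);
    printf("%zu bytes, the size of the largest member\n", sizeof v);
    return 0;
}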

To figure out which of these union members is currently in use, the type property of a zval stores a type tag, which is simply an integer:

#define IS_NULL     0      /* Doesn't use value */
#define IS_LONG     1      /* Uses lval */
#define IS_DOUBLE   2      /* Uses dval */
#define IS_BOOL     3      /* Uses lval with values 0 and 1 */
#define IS_ARRAY    4      /* Uses ht */
#define IS_OBJECT   5      /* Uses obj */
#define IS_STRING   6      /* Uses str */
#define IS_RESOURCE 7      /* Uses lval, which is the resource ID */
/* Special types used for late-binding of constants */
#define IS_CONSTANT 8
#define IS_CONSTANT_AST 9

Reference counting in PHP 5

Zvals in PHP 5 are (with a few exceptions) allocated on the heap and PHP needs some way to keep track of which zvals are currently in use and which should be freed. For this purpose reference counting is employed: The refcount__gc member of the zval structure stores how often a zval is currently “referenced”. For example in $a = $b = 42 the value 42 is referenced by two variables, so its refcount is 2. If the refcount reaches zero, it means a value is unused and can be freed.

Note that the references that the refcount refers to (how many times a value is currently used) have nothing to do with PHP references (using &). I will always use the terms “reference” and “PHP reference” to disambiguate the two concepts in the following. For now we’ll ignore PHP references altogether.

A concept that is closely related to reference counting is “copy on write”: A zval can only be shared between multiple users as long as it isn’t modified. In order to change a shared zval it needs to be duplicated (“separated”) and the modification will happen only on the duplicated zval.

Let's look at an example that shows off both copy-on-write and zval destruction:

$a = 42;   // $a         -> zval_1(type=IS_LONG, value=42, refcount=1)
$b = $a;   // $a, $b     -> zval_1(type=IS_LONG, value=42, refcount=2)
$c = $b;   // $a, $b, $c -> zval_1(type=IS_LONG, value=42, refcount=3)

// The following line causes a zval separation
$a += 1;   // $b, $c -> zval_1(type=IS_LONG, value=42, refcount=2)
           // $a     -> zval_2(type=IS_LONG, value=43, refcount=1)

unset($b); // $c -> zval_1(type=IS_LONG, value=42, refcount=1)
           // $a -> zval_2(type=IS_LONG, value=43, refcount=1)

unset($c); // zval_1 is destroyed, because refcount=0
           // $a -> zval_2(type=IS_LONG, value=43, refcount=1)

Reference counting has one fatal flaw: It is not able to detect and release cyclic references. To handle this PHP uses an additional cycle collector. Whenever the refcount of a zval is decremented and there is a chance that this zval is part of a cycle, the zval is written into a “root buffer”. Once this root buffer is full, potential cycles will be collected using a mark and sweep garbage collection.

In order to support this additional cycle collector, the actually used zval structure is the following:

typedef struct _zval_gc_info {
    zval z;
    union {
        gc_root_buffer       *buffered;
        struct _zval_gc_info *next;
    } u;
} zval_gc_info;

The zval_gc_info structure embeds the normal zval, as well as one additional pointer - note that u is a union, so this is really just one pointer with two different types it may point to. The buffered pointer is used to store where in the root buffer this zval is referenced, so that it may be removed from it if it’s destroyed before the cycle collector runs (which is very likely). next is used when the collector destroys values, but I won’t go into that here.

Motivation for change

Let’s talk about sizes a bit (all sizes are for 64-bit systems): First of all, the zvalue_value union is 16 bytes large, because both the str and obj members have that size. The whole zval struct is 24 bytes (due to padding) and zval_gc_info is 32 bytes. On top of this, allocating the zval on the heap adds another 16 bytes of allocation overhead. So we end up using 48 bytes per zval - although this zval may be used by multiple places.

At this point we can start thinking about the (many) ways in which this zval implementation is inefficient. Consider the simple case of a zval storing an integer, which by itself is 8 bytes. Additionally the type-tag needs to be stored in any case, which is a single byte by itself, but due to padding needs another 8 bytes.

To these 16 bytes that we really “need” (in first approximation), we add another 16 bytes handling reference counting and cycle collection and another 16 bytes of allocation overhead. Not to mention that we actually have to perform that allocation and the subsequent free, both being quite expensive operations.

This raises the question: Does a simple integer value really need to be stored as a reference-counted, cycle-collectible, heap-allocated value? The answer to this question is of course, no, this doesn’t make sense.

Here is a summary of the primary problems with the PHP 5 zval implementation:

  • Zvals (nearly) always require a heap allocation.
  • Zvals are always reference counted and always have cycle collection information, even in cases where sharing the value is not worthwhile (an integer) and it can’t form cycles.
  • Directly refcounting the zvals leads to double refcounting in the case of objects and resources. The reasons behind this will be explained in the next part.
  • Some cases involve quite an awesome amount of indirection. For example to access the object stored in a variable, a total of four pointers need to be dereferenced (which means following a pointer chain of length four). Once again this will be discussed in the next part.
  • Directly refcounting the zvals also means that values can only be shared between zvals. For example it’s not possible to share a string between a zval and hashtable key (without storing the hashtable key as a zval as well).

Zvals in PHP 7

And this brings us to the new zval implementation in PHP 7. The fundamental change that was implemented, is that zvals are no longer individually heap-allocated and no longer store a refcount themselves. Instead any complex values they may point to (like strings, arrays or objects) will store the refcount themselves. This has the following advantages:

  • Simple values do not require allocation and don’t use refcounting.
  • There is no more double refcounting. In the object case, only the refcount in the object is used now.
  • Because the refcount is now stored in the value itself, the value can be shared independently of the zval structure. A string can be used both in a zval and a hashtable key.
  • There is a lot less indirection, i.e. the number of pointers you need to follow to get to a value is lower.

Now let's take a look at how the new zval is defined:

struct _zval_struct {
    zend_value value;
    union {
        struct {
            ZEND_ENDIAN_LOHI_4(
                zend_uchar type,
                zend_uchar type_flags,
                zend_uchar const_flags,
                zend_uchar reserved)
        } v;
        uint32_t type_info;
    } u1;
    union {
        uint32_t var_flags;
        uint32_t next;                 // hash collision chain
        uint32_t cache_slot;           // literal cache slot
        uint32_t lineno;               // line number (for ast nodes)
        uint32_t num_args;             // arguments number for EX(This)
        uint32_t fe_pos;               // foreach position
        uint32_t fe_iter_idx;          // foreach iterator index
    } u2;
};

The first member stays pretty similar: this is still a value union. The second member is an integer storing type information, which is further subdivided into individual bytes using a union (you can ignore the ZEND_ENDIAN_LOHI_4 macro, which just ensures a consistent layout across platforms with different endianness). The important parts of this substructure are the type (which is similar to what it was before) and the type_flags, which I’ll explain in a moment.

At this point there exists a small problem: The value member is 8 bytes large and due to struct padding adding even a single byte to that grows the zval size to 16 bytes. However we obviously don’t need 8 bytes just to store a type. This is why the zval contains the additional u2 union, which remains unused by default, but can be repurposed by the surrounding code to store 4 bytes of data. The different union members correspond to different usages of this extra data slot.
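
The padding argument is easy to verify with a small standalone mock of the layout (this is not the real zval definition, just fields of the same shape and size):

#include <stdint.h>
#include <stdio.h>

typedef union { int64_t lval; double dval; void *ptr; } mock_value; /* 8 bytes */

struct mock_zval_no_u2 { mock_value value; uint32_t type_info; };
struct mock_zval       { mock_value value; uint32_t type_info; uint32_t u2; };

int main(void) {
    /* Both print 16 on a typical 64-bit ABI: without u2, the last
     * 4 bytes would simply be wasted as padding. */
    printf("%zu %zu\n", sizeof(struct mock_zval_no_u2), sizeof(struct mock_zval));
    return 0;
}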

The value union looks slightly different in PHP 7:

typedef union _zend_value {
    zend_long         lval;
    double            dval;
    zend_refcounted  *counted;
    zend_string      *str;
    zend_array       *arr;
    zend_object      *obj;
    zend_resource    *res;
    zend_reference   *ref;
    zend_ast_ref     *ast;

    // Ignore these for now, they are special
    zval             *zv;
    void             *ptr;
    zend_class_entry *ce;
    zend_function    *func;
    struct {
        ZEND_ENDIAN_LOHI(
            uint32_t w1,
            uint32_t w2)
    } ww;
} zend_value;

First of all, note that the value union is now 8 bytes instead of 16. It will only store integers (lval) and doubles (dval) directly, everything else is a pointer. All the pointer types (apart from those marked as special above) use refcounting and have a common header defined by zend_refcounted:

struct _zend_refcounted {
    uint32_t refcount;
    union {
        struct {
            ZEND_ENDIAN_LOHI_3(
                zend_uchar    type,
                zend_uchar    flags,
                uint16_t      gc_info)
        } v;
        uint32_t type_info;
    } u;
};

Of course the structure contains a refcount. Additionally it contains a type, some flags and gc_info. The type just duplicates the zval type and allows the GC to distinguish different refcounted structures without storing a zval. The flags are used for different purposes with different types and will be explained for each type separately in the next part.

The gc_info is the equivalent of the buffered entry in the old zvals. However instead of storing a pointer into the root buffer it now contains an index into it. Because the root buffer has a fixed size (10000 elements) it is enough to use a 16 bit number for this instead of a 64 bit pointer. The gc_info also encodes the “color” of the node, which is used to mark nodes during collection.

Zval memory management

I’ve mentioned that zvals are no longer individually heap-allocated. However they obviously still need to be stored somewhere, so how does this work? While zvals are still mostly part of heap-allocated structures, they are directly embedded into them. E.g. a hashtable bucket will directly embed a zval instead of storing a pointer to a separate zval. The compiled variables table of a function or the property table of an object will be zval arrays that are allocated in one chunk, instead of storing pointers to separate zvals. As such zvals are now usually stored with one level of indirection less. What was previously a zval* is now a zval.

When a zval is used in a new place, previously this meant copying a zval* and incrementing its refcount. Now it means copying the contents of a zval (ignoring u2) instead and maybe incrementing the refcount of the value it points to, if said value uses refcounting.

How does PHP know whether a value is refcounted? This cannot be determined solely based on the type, because some types like strings and arrays are not always refcounted. Instead one bit of the zval's type_info member determines whether or not the zval is refcounted. There are a number of other bits encoding properties of the type:

#define IS_TYPE_CONSTANT            (1<<0)   /* special */
#define IS_TYPE_IMMUTABLE           (1<<1)   /* special */
#define IS_TYPE_REFCOUNTED          (1<<2)
#define IS_TYPE_COLLECTABLE         (1<<3)
#define IS_TYPE_COPYABLE            (1<<4)
#define IS_TYPE_SYMBOLTABLE         (1<<5)   /* special */

The three primary properties a type can have are “refcounted”, “collectable” and “copyable”. You already know what refcounted means. Collectable means that the zval can participate in a cycle. E.g. strings are (often) refcounted, but there’s no way you can create a cycle with a string in it.

Copyability determines whether the value needs to be copied when a “duplication” is performed. A duplication is a hard copy, e.g. if you duplicate a zval that points to an array, this will not simply increase the refcount on the array. Instead a new and independent copy of the array will be created. However for some types like objects and resources even a duplication should only increment the refcount - such types are called non-copyable. This matches the passing semantics of objects and resources (which are, for the record, not passed by reference).

The following table shows the different types and what type flags they use. “Simple types” refers to types like integers or booleans that don’t use a pointer to a separate structure. A column for the “immutable” flag is also present, which is used to mark immutable arrays and will be discussed in more detail in the next part.

type            | refcounted | collectable | copyable | immutable
----------------+------------+-------------+----------+----------
simple types    |            |             |          |
string          |      x     |             |     x    |
interned string |            |             |          |
array           |      x     |      x      |     x    |
immutable array |            |             |          |     x
object          |      x     |      x      |          |
resource        |      x     |             |          |
reference       |      x     |             |          |
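
To see how engine code consults these bits, here is a sketch of the zval copy described earlier. The macro is modeled on the engine's Z_REFCOUNTED but simplified; the flags byte sits above the type byte within type_info:

#define MY_Z_REFCOUNTED(z) \
    (((z).u1.type_info & (IS_TYPE_REFCOUNTED << 8)) != 0)

static void my_zval_copy(zval *dst, const zval *src) {
    dst->value = src->value;            /* copy the 8-byte value */
    dst->u1    = src->u1;               /* copy the type information */
    if (MY_Z_REFCOUNTED(*dst)) {
        ++dst->value.counted->refcount; /* the shared structure gains a user */
    }
}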

At this point, let's take a look at two examples of how zval management works in practice. First, an example using integers, based on the PHP 5 example from above:

$a = 42;   // $a = zval_1(type=IS_LONG, value=42)

$b = $a;   // $a = zval_1(type=IS_LONG, value=42)
           // $b = zval_2(type=IS_LONG, value=42)

$a += 1;   // $a = zval_1(type=IS_LONG, value=43)
           // $b = zval_2(type=IS_LONG, value=42)

unset($a); // $a = zval_1(type=IS_UNDEF)
           // $b = zval_2(type=IS_LONG, value=42)

This is pretty boring. As integers are no longer shared, both variables will use separate zvals. Don’t forget that these are now embedded rather than allocated, which I try to signify by writing = instead of a -> pointer. Unsetting a variable will set the type of the corresponding zval to IS_UNDEF. Now consider a more interesting case where a complex value is involved:

$a = [];   // $a = zval_1(type=IS_ARRAY) -> zend_array_1(refcount=1, value=[])

$b = $a;   // $a = zval_1(type=IS_ARRAY) -> zend_array_1(refcount=2, value=[])
           // $b = zval_2(type=IS_ARRAY) ---^

// Zval separation occurs here
$a[] = 1;  // $a = zval_1(type=IS_ARRAY) -> zend_array_2(refcount=1, value=[1])
           // $b = zval_2(type=IS_ARRAY) -> zend_array_1(refcount=1, value=[])

unset($a); // $a = zval_1(type=IS_UNDEF) and zend_array_2 is destroyed
           // $b = zval_2(type=IS_ARRAY) -> zend_array_1(refcount=1, value=[])

Here each variable still has a separate (embedded) zval, but both zvals point to the same (refcounted) zend_array structure. Once a modification is done the array needs to be duplicated. This case is similar to how things work in PHP 5.

Types

Let's take a closer look at what types are supported in PHP 7:

// regular data types
#define IS_UNDEF                    0
#define IS_NULL                     1
#define IS_FALSE                    2
#define IS_TRUE                     3
#define IS_LONG                     4
#define IS_DOUBLE                   5
#define IS_STRING                   6
#define IS_ARRAY                    7
#define IS_OBJECT                   8
#define IS_RESOURCE                 9
#define IS_REFERENCE                10

// constant expressions
#define IS_CONSTANT                 11
#define IS_CONSTANT_AST             12

// internal types
#define IS_INDIRECT                 15
#define IS_PTR                      17

This list is quite similar to what was used in PHP 5, however there are a few additions:

  • The IS_UNDEF type is used in places where previously a NULL zval pointer (not to be confused with an IS_NULL zval) was used. For example, in the refcounting examples above the IS_UNDEF type is set for variables that have been unset.
  • The IS_BOOL type has been split into IS_FALSE and IS_TRUE. As such the value of the boolean is now encoded in the type, which allows the optimization of a number of type-based checks. This change is transparent to userland, where this is still a single “boolean” type.
  • PHP references no longer use an is_ref flag on the zval and use a new IS_REFERENCE type instead. How this works will be described in the next section.
  • The IS_INDIRECT and IS_PTR types are special internal types.

The IS_LONG type now uses a zend_long value instead of an ordinary C long. The reason behind this is that on 64-bit Windows (LLP64) a long is only 32 bits wide, so PHP 5 ended up always using 32-bit numbers on Windows. PHP 7 will allow you to use 64-bit numbers if you’re on a 64-bit operating system, even if that operating system is Windows.

Details of the individual zend_refcounted types will be discussed in the next part. For now we’ll only look at the implementation of PHP references.

References

PHP 7 uses an entirely different approach to handling PHP references (&) than PHP 5 (and I can tell you that this change is one of the largest sources of bugs in PHP 7). Let's start by taking a look at how PHP references used to work in PHP 5:

Normally, the copy-on-write principle says that before modifying a zval it needs to be separated, in order to make sure you don’t end up changing the value for every place sharing the zval. This matches by-value passing semantics.

For PHP references this does not apply. If a value is a PHP reference, you want it to change for every user of the value. The is_ref flag that was part of PHP 5 zvals determined whether a value is a PHP reference and as such whether it required separation before modification. An example:

$a = [];  // $a     -> zval_1(type=IS_ARRAY, refcount=1, is_ref=0) -> HashTable_1(value=[])
$b =& $a; // $a, $b -> zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_1(value=[])

$b[] = 1; // $a = $b = zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_1(value=[1])

One significant problem with this design is that it’s not possible to share a value between a variable that’s a PHP reference and one that isn’t. Consider the following example:

$a = [];  // $a         -> zval_1(type=IS_ARRAY, refcount=1, is_ref=0) -> HashTable_1(value=[])
$b = $a;  // $a, $b     -> zval_1(type=IS_ARRAY, refcount=2, is_ref=0) -> HashTable_1(value=[])
$c = $b;  // $a, $b, $c -> zval_1(type=IS_ARRAY, refcount=3, is_ref=0) -> HashTable_1(value=[])

$d =& $c; // $a, $b -> zval_1(type=IS_ARRAY, refcount=2, is_ref=0) -> HashTable_1(value=[])
          // $c, $d -> zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_2(value=[])
          // $d is a reference of $c, but *not* of $a and $b, so the zval needs to be copied
          // here. Now we have the same zval once with is_ref=0 and once with is_ref=1.

$d[] = 1; // $a, $b -> zval_1(type=IS_ARRAY, refcount=2, is_ref=0) -> HashTable_1(value=[])
          // $c, $d -> zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_2(value=[1])
          // Because there are two separate zvals $d[] = 1 does not modify $a and $b.

This behavior of references is one of the reasons why using references in PHP will usually end up being slower than using normal values. To give a less-contrived example where this is a problem:

$array = range(0, 1000000);
$ref =& $array;
var_dump(count($array)); // <-- separation occurs here

Because count() accepts its value by-value, but $array is a PHP reference, a full copy of the array is done before passing it off to count(). If $array weren’t a reference, the value would be shared instead.

Now, let’s switch to the PHP 7 implementation of PHP references. Because zvals are no longer individually allocated, it is not possible to use the same approach that PHP 5 used. Instead a new IS_REFERENCE type is added, which uses the zend_reference structure as its value:

struct _zend_reference {
    zend_refcounted   gc;
    zval              val;
};

So essentially a zend_reference is simply a refcounted zval. All variables in a reference set will store a zval with type IS_REFERENCE pointing to the same zend_reference instance. The val zval behaves like any other zval, in particular it is possible to share a complex value it points to. E.g. an array can be shared between a variable that is a reference and another that is a value.
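
A sketch of how a plain zval is upgraded into a PHP reference, in the spirit of the engine's ZVAL_NEW_REF (flag handling simplified):

static void make_ref(zval *zv) {
    zend_reference *ref = emalloc(sizeof(zend_reference));
    ref->gc.refcount = 1;
    ref->val = *zv;                  /* move the current value inside */
    zv->value.ref = ref;             /* repoint the variable's zval */
    zv->u1.type_info = IS_REFERENCE; /* real code also sets the refcounted flag */
}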

Let's go through the above code samples again, this time looking at the PHP 7 semantics. For the sake of brevity I will stop writing the individual zvals of the variables and only show what structure they point to.

$a = [];  // $a                                     -> zend_array_1(refcount=1, value=[])
$b =& $a; // $a, $b -> zend_reference_1(refcount=2) -> zend_array_1(refcount=1, value=[])

$b[] = 1; // $a, $b -> zend_reference_1(refcount=2) -> zend_array_1(refcount=1, value=[1])

The by-reference assignment created a new zend_reference. Note that the refcount is 2 on the reference (because two variables are part of the PHP reference set), but the value itself only has a refcount of 1 (because one zend_reference structure points to it). Now consider the case where references and non-references are mixed:

$a = [];  // $a         -> zend_array_1(refcount=1, value=[])
$b = $a;  // $a, $b     -> zend_array_1(refcount=2, value=[])
$c = $b;  // $a, $b, $c -> zend_array_1(refcount=3, value=[])

$d =& $c; // $a, $b                                 -> zend_array_1(refcount=3, value=[])
          // $c, $d -> zend_reference_1(refcount=2) ---^
          // Note that all variables share the same zend_array, even though some are
          // PHP references and some aren't.

$d[] = 1; // $a, $b                                 -> zend_array_1(refcount=2, value=[])
          // $c, $d -> zend_reference_1(refcount=2) -> zend_array_2(refcount=1, value=[1])
          // Only at this point, once an assignment occurs, the zend_array is duplicated.

The important difference to PHP 5 is that all variables were able to share the same array, even though some were PHP references and some weren’t. Only once some kind of modification is performed will the array be separated. This means that in PHP 7 it’s safe to pass a large, referenced array to count(); it is not going to be duplicated. References will still be slower than normal values, because they require allocation of the zend_reference structure (and indirection through it) and are usually not handled in the fast-path of engine code.

Wrapping up

To summarize, the primary change that was implemented in PHP 7 is that zvals are no longer individually heap-allocated and no longer store a refcount themselves. Instead any complex values they may point to (like strings, arrays or objects) store the refcount themselves. This usually leads to fewer allocations, less indirection and lower memory usage.

In the second part of this article the remaining complex types will be discussed.

News stories from Tuesday 14 April, 2015

Favicon for Fabien Potencier 23:00 Blackfire, a new Profiler for PHP Developers » Post from Fabien Potencier Visit off-site link

I've always been fascinated by debugging tools; tools that help you understand what's going on in your code. In the Symfony world, the web debug toolbar and the web profiler are tools that gives a lot of information about HTTP request/response pairs (from exceptions to logs, submitted forms and even an event timeline), but it's only available in development mode as enabling those features in production would have a too significant performance impact. The Symfony profiler is also more about giving metadata about the code execution and less about what is executed.

If you want to understand which part of your code is executed for any given request, and where the server resources are spent, you need special tools; tools that instrument your code at the C level. The oldest tool able to do that is XDebug, and a few years ago Facebook also open-sourced XHProf. Both XDebug (when used as a profiler) and XHProf can answer a lot of questions you might have about the performance of your code, and they can help you understand why your code is slow.

But even if tools are available, performance monitoring in the PHP world is not that widespread. You are probably writing unit tests for your applications to ensure that you don't accidentally deploy broken features and to avoid regressions when you are fixing bugs. But what about performance? A broken page is a problem, but what about a page that takes seconds to display? Less performance means less business. So, continuously testing the performance of your applications should be a critical part of your development workflow.

Enter Blackfire. Blackfire is a PHP profiler that simplifies the profiling of an app as much as possible.

The first big difference with existing tools is the installation process; we've made it straightforward by providing easy-to-follow instructions for a lot of different platforms and Blackfire is even included by default on some major PHP cloud providers.

Once installed, profiling an HTTP request is as easy as it can get: use the Google Chrome extension to profile web pages from your browser, or use the command line tool to profile web services, APIs, PHP CLI scripts, or even long-running scripts like daemons or workers.

The other major difference from existing tools comes from the fact that Blackfire is a SaaS product. That lets us do a lot of things that would not be possible otherwise, like storing the history of your profiles, making comparisons between two profiles really easy, or providing a rich and interactive UI that evolves on a day-to-day basis.

If you've used XHProf in the past, you might wonder if it would make sense for you to upgrade to Blackfire. First, and contrary to popular belief, the current Blackfire PHP extension is not based on the XHProf code anymore. Starting from scratch helped us lower the overhead and structure the code for extensibility.

Then, and besides the "better experience", Blackfire offers some unique features like:

  • Profile your applications without changing a single line of code;
  • Easily focus on code you need to optimize thanks to more accurate results, aggregation, and smart cleaning of data;
  • More information about CPU time and I/O time;
  • No performance impact on the production servers when not using the profiler;
  • SQL statements and HTTP calls extraction;
  • Team profiling;
  • Profile sharing;
  • An API;
  • Garbage collector information;
  • The soon-to-be-announced Windows support;
  • And much more...

We are very active on our blog where you can learn more about the great features we are providing for developers and companies.

Blackfire has been in public beta for four months now and the response has been amazing so far. More than 20,000 developers have already signed up. You can read some user feedback on our Twitter account, and some of them even wrote about their experience on the Blackfire blog: I recommend the article from ownCloud, as they did a lot of performance tweaks to make their code run faster thanks to Blackfire.

My mission with Blackfire is to give developers the best possible profiler for their applications. Try it out today for free and tell me what you think!

News stories from Wednesday 01 April, 2015

Favicon for Grumpy Gamer 08:00 Once Again... » Post from Grumpy Gamer Visit off-site link

In what's become a global internet tradition that will be passed down for generations to come...

Grumpy Gamer is 100% April Fools' joke free because April Fools' Day is a stupid fucking tradition.  There.  I said what everyone is thinking.


News stories from Tuesday 24 March, 2015

Favicon for ircmaxell's blog 16:00 Thoughts On The Design Of APIs » Post from ircmaxell's blog Visit off-site link
Developers as a whole suck at API design. We don't suck at making APIs. We don't suck at implementing them. We don't suck at using them (well, some more than others). But we do suck at designing them. In fact, we suck so much that we've made entire disciplines around trying to design better ones (BDD, DDD, TDD, etc). There are lots of reasons for this, but there are a few that I really want to focus on.

Read more »

News stories from Friday 20 March, 2015

Favicon for Web Mozarts 10:05 Managing Web Assets with Puli » Post from Web Mozarts Visit off-site link

Yesterday marked the release of the last beta version of Puli 1.0. Puli is now feature-complete and ready for you to try. The documentation has been updated and contains all the information that you need to get started. My current plan is to publish a Release Candidate by the end of the month and a first stable release at the end of April.

The most important addition since the last beta release is Puli’s new Asset Plugin. Today, I’d like to show you how this plugin helps to manage the web assets of your project and your installed Composer packages independent of any specific PHP framework.

What is Puli?

You never heard of Puli before? In a nutshell, Puli is a resource manager built on top of Composer. Just like Composer generates an autoloader for the classes in your Composer packages, Puli generates a resource repository that contains all files that are not PHP classes (images, CSS, XML, YAML, HTML, you name it). You can access these resources by simple paths prefixed with the name of the package:

echo $twig->render('/acme/blog/views/footer.html.twig');

The only exceptions are end-user applications, which have the prefix /app by convention:

echo $twig->render('/app/views/index.html.twig');

Read Puli at a Glance to get a better high-level view of Puli’s features.

Update 2015/04/06

This post was updated in order to reflect that Puli’s Web Resource Plugin was renamed to “Asset Plugin”.

Web Assets

Some resources – such as templates or configuration files – are needed on the server only. Others – like CSS files and images – need to be placed in a public directory, where browsers can download them. I’ll call these files web assets here.

Puli’s Asset Plugin takes care of two things:

  • installing web assets in their public location;
  • generating the URLs for these assets.

The public location for installing assets is called an install target in Puli’s language. Puli supports virtually any kind of install target, such as:

  • the document root of your own web server
  • the document root of another web server
  • a Content Delivery Network (CDN)

Install targets store three pieces of information:

  • their location (a directory path, a URL, …)
  • the used installer (symlink, copy, ftp, rsync, …)
  • their URL format

The URL format is used to generate URLs for the assets installed in the target. The default format is /%s, but you could set it to more elaborate values such as http://cdn.example.com/path/%s?v3.

Creating an Install Target

Let me walk you through a simple example of using the plugin for a typical project. We will work with the following setup:

  • the application’s assets are stored in the Puli path /app/public
  • the assets of the “acme/blog” package are stored in /acme/blog/public
  • all assets should be installed in the directory public_html

Before we can start, we need to install the plugin with Composer:

$ composer require puli/asset-plugin:~1.0

Make sure “minimum-stability” is set to “dev” in your composer.json file:

{
    "minimum-stability": "dev"
}

Activate the plugin with Puli’s Command Line Interface (CLI):

$ puli plugin install Puli\\AssetPlugin\\Api\\AssetPlugin

The plugin is loaded successfully if the command puli target succeeds:

$ puli target
No install targets. Use "puli target add <name> <directory>" to add a target.

Let’s create a target named “local” now that points to the aforementioned public_html directory:

$ puli target add local public_html

Run puli target again to see the target that you just added:

Result of the command "puli target"

Installing Web Assets

With the install target ready, we can now map resources to the target:

$ puli asset map /app/public /
$ puli asset map /acme/blog/public /blog

Let’s run puli asset to see the mappings we added:

The output of this command gives us a lot of information:

  • We added our assets to the default target, i.e. our only target “local”. In some cases, it is useful to have more than one install target.
  • The assets in /app/public will be installed in public_html.
  • The assets in /acme/blog/public will be installed in public_html/blog.

All that is left to do is to install the assets:
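
This is done with the plugin's install command (the command name below is assumed from the asset command group used above, so double-check it against the Puli docs):

$ puli asset install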

You should be able to access your assets in the browser now.

Generating Resource URLs

Now that our assets are publicly available, our application needs to generate their proper URLs. If you use Twig, you can use the asset_url() function of Puli’s Twig Extension to do that:

<!-- /images/header.png -->
<img src="{{ asset_url('/app/public/images/header.png') }}" />

The function accepts absolute Puli paths or paths relative to the Puli path of your template:

<img src="{{ asset_url('../images/header.png') }}" />

If you need to generate URLs in PHP code, you can use Puli’s AssetUrlGenerator. Add the following setup code to your bootstrap file or your Dependency Injection Container:

// Puli setup
$factoryClass = PULI_FACTORY_CLASS;
$factory = new $factoryClass();
$repository = $factory->createRepository();
$discovery = $factory->createDiscovery($repository);
 
// URL Generator setup
$urlGenerator = $factory->createUrlGenerator($discovery);

Asset URLs can be generated with the generateUrl() method of the URL generator:

// /images/header.png
$urlGenerator->generateUrl('/app/public/images/header.png');

Read the Web Assets guide in the Puli Documentation if you want to learn more about handling web assets with Puli.

The Future of Packages in PHP

With Puli and especially with Puli’s Asset Plugin, we have exciting new possibilities of creating Composer packages that work with different frameworks at the same time. Basically, a bundle/plugin/module/… of the framework of your choice is reduced to:

  • PHP code, which is autoloaded by Composer’s autoloader.
  • Resource files that are managed and published by Puli.
  • A thin layer of configuration files/code for integrating your package with a framework of your choice.

Since the framework-dependent code is reduced to a few configuration files or classes, it is possible to add support for multiple frameworks at the same time. For open-source developers, that’s a great thing, because they have to maintain far fewer packages and much less code than before. For users of open-source software, that’s a great thing too, because it becomes possible to use the magnificent package X with your framework Y, even though X was sadly developed for framework Z. I think that’s exciting. Do you?

Let me know what you think in the comments. Read the Web Assets guide in the Puli Documentation if you want to learn more about the plugin.

News stories from Monday 16 March, 2015

Favicon for ircmaxell's blog 20:30 Dimensional Analysis » Post from ircmaxell's blog Visit off-site link
There's one skill that I learned in College that I wish everyone would learn. I wish it was taught to everyone in elementary school, it's that useful. It's also deceptively simple. So without any more introduction, let's talk about Dimensional Analysis:

Read more »

News stories from Thursday 12 March, 2015

Favicon for ircmaxell's blog 20:00 Security Issue: Combining Bcrypt With Other Hash Functions » Post from ircmaxell's blog Visit off-site link
The other day, I was directed at an interesting question on StackOverflow asking if password_verify() was safe against DoS attacks using extremely long passwords. Many hashing algorithms depend on the amount of data fed into them, which affects their runtime. This can lead to a DoS attack where an attacker can provide an exceedingly long password and tie up computer resources. It's a really good question to ask of Bcrypt (and password_hash). As you may know, Bcrypt is limited to 72 character passwords. So on the surface it looks like it shouldn't be vulnerable. But I chose to dig in further to be sure. What I found surprised me.

Read more »

News stories from Tuesday 10 March, 2015

Favicon for Ramblings of a web guy 00:04 Using socket_connect with a timeout » Post from Ramblings of a web guy Visit off-site link
TL;DR

I was having trouble with socket connections timing out reliably. Sometimes, my timeout would be reached. Other times, the connect would fail after three to six seconds. I finally figured out it had to do with trying to connect to a routable, non-localhost address. The function I finally ended up with reliably connects to a working server, fails quickly for an address/port that is not reachable, and honors the timeout for routable addresses that are not up.

I have put a version of my final function into a Gist on Github. I hope someone finds it useful.

Full Story

So, it seems that when you try to connect to an IP that is routable on the network but not answering, the TCP stack has some built-in timeouts that are not obvious. This differs from trying to connect to an IP address that is up but not listening on a given port. We took a Gearman server down for maintenance and I noticed our warning logs were showing a 3 to 7 second delay between the attempt to queue jobs and the warning log. The timeout we had set was only 100ms. So, this seemed odd.

After a lot of messing around, a coworker pointed out that in production, the failures were happening for an IP that was routable on the network, but that had no host listening on the IP. I had been using localhost and some foreign port for my "failed" server. After using an IP that was local to our LAN but had no host listening on the IP, I was able to recreate it on a dev server. I figured out that if you set the send and receive timeouts really low before calling connect, you can loop while calling connect. You check the error state and timeout. As long as the error is an acceptable one and the timeout is not reached, keep trying until it connects. It works like a charm.

I found several similar examples to this on the web. However, none of them mixed all these techniques.

You can simply set the send and receive timeouts to your actual timeout and it will return quicker. However, those timeouts apply per packet, and there are retry rules in place. So I found that a 100ms timeout for each send and receive would wind up taking 500ms or so to actually fail. This was not what I wanted; I wanted more control. So, I set a 100 microsecond timeout during connect. This makes socket_connect return quickly. As long as the socket error is 115 (in progress) or 114 (already trying), we keep calling it. Unless of course our timeout is reached. Then we fail.
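
Here is a minimal PHP sketch of that loop (not the exact Gist linked above; 115, 114 and 106 are the Linux errno values for EINPROGRESS, EALREADY and EISCONN):

function connectWithTimeout($host, $port, $timeoutSeconds)
{
    $socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);

    // Tiny 100 microsecond send/receive timeouts make socket_connect()
    // return almost immediately instead of blocking.
    $tv = array('sec' => 0, 'usec' => 100);
    socket_set_option($socket, SOL_SOCKET, SO_SNDTIMEO, $tv);
    socket_set_option($socket, SOL_SOCKET, SO_RCVTIMEO, $tv);

    $start = microtime(true);
    while (!@socket_connect($socket, $host, $port)) {
        $error = socket_last_error($socket);
        if ($error === 106) {
            return $socket; // EISCONN: an earlier attempt finished connecting
        }
        // Keep looping only while the connect is still in progress (115)
        // or already being retried (114) and our timeout is not reached.
        if (($error !== 115 && $error !== 114)
            || (microtime(true) - $start) >= $timeoutSeconds) {
            socket_close($socket);
            return false;
        }
        usleep(1000);
    }
    return $socket;
}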

It works really well. Should help for doing server maintenance on our Gearman servers.

News stories from Saturday 21 February, 2015

Favicon for Grumpy Gamer 03:10 Thimbleweed Park Dev Blog » Post from Grumpy Gamer Visit off-site link

If you're wondering why it's so quiet over here at Grumpy Gamer, rest assured, it has nothing to do with me not being grumpy anymore.

The mystery can be solved by heading on over to the Thimbleweed Park Dev Blog and following the fun antics of making a game.

News stories from Wednesday 11 February, 2015

Favicon for ircmaxell's blog 19:00 Scalar Types and PHP » Post from ircmaxell's blog Visit off-site link
There's currently a proposal that's under vote to add Scalar Typing to PHP (it has since been withdrawn). It's been a fairly controversial RFC, but at this point in time it's currently passing with 67.8% of votes. If you want a simplified breakdown of the proposal, check out Pascal Martin's excellent post about it. What I want to talk about is more of an opinion: why I believe this is the correct approach to the problem.

I have now forked the original proposal and will be bringing it to a vote shortly.
Read more »
Ircmaxell?i=qIFvtUtDnsI:hUzyqOIeQcw:4cEx4HpKnUU Ircmaxell?d=yIl2AUoC8zA Ircmaxell?i=qIFvtUtDnsI:hUzyqOIeQcw:V_sGLiPBpWU Ircmaxell?d=qj6IDK7rITs

News stories from Tuesday 03 February, 2015

Favicon for Ramblings of a web guy 04:02 Most epic ticket of the day » Post from Ramblings of a web guy Visit off-site link
UPDATE: I should clarify. This ticket is an internal ticket at DealNews. It is about what the defaults on our servers should be. It is not about what the defaults should be in MySQL. The frustration that UTF8 support in MySQL is only 3 bytes is quite real.

 This epic ticket of the day is brought to you by Joe Hopkinson.

#7940: Default charset should be utf8mb4
------------------------------------------------------------------------
 The RFC for UTF-8 states, AND I QUOTE:

 > In UTF-8, characters from the U+0000..U+10FFFF range (the UTF-16
 accessible range) are encoded using sequences of 1 to 4 octets.

 What's that? You don't believe me?! Well, you can read it for yourself
 here!

 What is an octet, you ask? It's a unit of digital information in computing
 and telecommunications that consists of eight bits. (Hence, __oct__et.)

 "So what?", said the neck bearded MySQL developer dressed as Neo from the
 Matrix, as he smugly quaffed a Surge and settled down to play Virtua
 Fighter 4 on his dusty PS2.

 So, if you recall from your Pre-Intro to Programming, 8 bits = 1 byte.
 Thus, the RFC states that the maximum storage requirement for a
 multibyte character is 4 bytes.

 I know that RFCs are more of a GUIDELINE, right? It's not like they could be
 considered a standard or anything! It's not like there should be an
 implicit contract when an implementor decides to use a label like "UTF-8",
 right?

 Because of you, we have to strip our readers' carefully crafted emoji.
 Because of you, our search term data will never be exact. Because of you,
 we have to spend COUNTLESS HOURS altering every table that we have (which
 is a lot, by the way) to make sure that we can support a standard that was
 written in 2003!

 A cursory search shows that shortly after 2003, MySQL release quality
 started to tank. I can only assume that was because of you.

 Jerk.

 * The default charset should be utf8mb4.
 * Alter and test critical business processes.
 * Change OrderedFunctionSet to generate the appropriate tables.
 * Generate ptosc or propagator scripts to update everything else, as needed.
 * Curse the MySQL developer who caused this.

News stories from Wednesday 28 January, 2015

Favicon for #openttdcoop 23:40 Server/DevZone Outage » Post from #openttdcoop Visit off-site link

Hi,

As you may have noticed, our services have had some outage. This happened during maintenance that was required for security updates related to CVE-2015-0235 (the glibc story / http://www.openwall.com/lists/oss-security/2015/01/27/9). When we rebooted the server, the scariest thing happened: our server did not come back online. After some help from our hosting provider we managed to log back in.

To make the most out of this situation we also immediately started converting some of our local containers to a disk image format (PLOOP / https://openvz.org/Ploop/Why). However, because one of our main containers, which holds all the HG repositories, has so many small files, this conversion is taking longer than expected.

We want to apologize for this situation and are waiting for this container conversion to finish. After that, the most critical containers should all have been converted; most of the other ones are related to non-development stuff that should not see extended downtime like this.

Regards,

^Spike^

News stories from Tuesday 27 January, 2015

Favicon for #openttdcoop 20:26 RAWR!!! » Post from #openttdcoop Visit off-site link

Ladies and nutmen,

just now I am realizing I forgot to officially mention that I have been working on another project for the past months. RAWR Absolute World Replacement is currently a 32bpp/ExtraZoom LANDSCAPE with ROADS and TRACKS. Eventually I am hoping to replace all the sprites the game needs, so the final output could be a full base set.

Visually, the set is obviously 32bpp/ExtraZoom, which looks relatively nice. Functionally, it lets you choose from the 4 climates and force any of them visually. That way you can apply whichever one you want – especially if you load the NewGRF as a static one. I hope you like it; there are still a lot of things to be done, but the core is there.

The project home is at the DevZone as usual – you can also find a guide there on how to apply static NewGRFs. I also have a thread at tt-forums; you are welcome to contribute/place your impressions/screenshots there 🙂

You can download RAWR from the online content – BaNaNaS – through the game, or from the website manually.
Enjoy and let me know what you think!

V


News stories from Tuesday 20 January, 2015

Favicon for Joel on Software 01:14 Stack Exchange Raises $40m » Post from Joel on Software Visit off-site link

Today Stack Exchange is pleased to announce that we have raised $40 million, mostly from Andreessen Horowitz.

Everybody wants to know what we’re going to do with all that money. First of all, of course we’re going to gold-plate the Aeron chairs in the office. Then we’re going to upgrade the game room, and we’re already sending lox platters to our highest-rep users.

But I’ll get into that in a minute. First, let me catch everyone up on what’s happening at Stack Exchange.

In 2008, Jeff Atwood and I set out to fix a problem for programmers. At the time, getting answers to programming questions online was super annoying. The answers that we needed were hidden behind paywalls, or buried in thousands of pages of stale forums.

So we built Stack Overflow with a single-minded, compulsive, fanatical obsession with serving programmers with a better Q&A site.

Everything about how Stack Overflow works today was designed to make programmers’ jobs easier. We let members vote up answers, so we can show you the best answer first. We don’t allow opinionated questions, because they descend into flame wars that don’t help people who need an answer right now. We have scrupulously avoided any commercialization of our editorial content, because we want to have a site that programmers can trust.

Heck, we don’t even allow animated ads, even though they are totally standard on every other site on the Internet, because it would be disrespectful to programmers to strain their delicate eyes with a dancing monkey, and we can’t serve them 100% if we are distracting them with a monkey. That would only be serving them 98%. And we’re OBSESSED, so 98% is like, we might as well close this all down and go drive taxis in Las Vegas.

Anyway, it worked! Entirely thanks to you. An insane number of developers stepped up to pass on their knowledge and help others. Stack Overflow quickly grew into the largest, most trusted repository of programming knowledge in the world.

Quickly, Jeff and I discovered that serving programmers required more than just code-related questions, so we built Server Fault and Super User. And when that still didn’t satisfy your needs, we set up Stack Exchange so the community could create sites on new topics. Now when a programmer has to set up a server, or a PC, or a database, or Ubuntu, or an iPhone, they have a place to go to ask those questions that are full of the people who can actually help them do it.

But you know how programmers are. They “have babies.”  Or “take pictures of babies.” So our users started building Stack Exchange sites on unrelated topics, like parenting and photography, because the programmers we were serving expected—nay, demanded!—a place as awesome as Stack Overflow to ask about baby feeding schedules and f-stops and whatnot.

And we did such a good job of serving programmers that a few smart non-programmers looked at us and said, “Behold! I want that!” and we thought, hey!  What works for developers should work for a lot of other people, too, as long as they’re willing to think like developers, which is the best way to think. So, we decided that anybody who wants to get with the program is welcome to join in our plan. And these sites serve their own communities of, you know, bicycle mechanics, or what have you, and make the world safer for the Programmer Way Of Thinking and thus serve programmers by serving bicycle mechanics.

In the five years since then, our users have built 133 communities. Stack Overflow is still the biggest. It reminds me of those medieval maps of the ancient world. The kind that shows a big bustling city (Jerusalem) smack dab in the middle, with a few smaller settlements around the periphery. (Please imagine Gregorian chamber music).


View of Jerusalem
Stack Overflow is the big city in the middle. Because the programmer-city worked so well, people wanted to ask questions about other subjects, so we let them build other Q&A villages in the catchment area of the programmer-city. Some of these Q&A villages became cities of their own. The math cities barely even have any programmers and they speak their own weird language. They are math-Jerusalem. They make us very proud. Even though they don’t directly serve programmers, we love them and they bring a little tear to our eyes, like the other little villages, and they’re certainly making the Internet—and the world—better, so we’re devoted to them.

One of these days some of those villages will be big cities, so we’re committed to keeping them clean, and pulling the weeds, and helping them grow.

But let’s go back to programmer Jerusalem, which—as you might expect—is full of devs milling about, building the ENTIRE FUTURE of the HUMAN RACE, because, after all, software is eating the world and writing software is just writing a script for how the future will play out.

So given the importance of software and programmers, you might think they all had wonderful, satisfying jobs that they love.

But sadly, we saw that was not universal. Programmers often have crappy jobs, and their bosses often poke them with sharp sticks. They are underpaid, and they aren’t learning things, and they are sometimes overqualified, and sometimes underqualified. So we decided we could actually make all the programmers happier if we could move them into better jobs.

That’s why we built Stack Overflow Careers. This was the first site that was built for developers, not recruiters. We banned the scourge of contingency recruiters (even if they have big bank accounts and are just LINING UP at the Zion Gate trying to get into our city to feed on programmer meat, but, to hell with them). We are SERVING PROGRAMMERS, not spammers. Bye Felicia.

Which brings us to 2015.

The sites are still growing like crazy. By our measurements, the Stack Exchange network is already in the top 50 of all US websites, ranked by number of unique visitors, with traffic still growing at 25% annually. The company itself has passed 200 employees worldwide, with big plush offices in Denver, New York, and London, and dozens of amazing people who work from the comfort of their own homes. (By the way, if 200 people seems like a lot, keep in mind that more than half of them are working on Stack Overflow Careers).

We could just slow down our insane hiring pace and get profitable right now, but it would mean foregoing some of the investments that let us help more developers. To be honest, we literally can’t keep up with the features we want to build for our users. The code is not done yet—we’re dedicating a lot of resources to the core Q&A engine. This year we’ll work on improving the experience for both new users and highly experienced users.

And let’s not forget Stack Overflow Careers. I believe it is, bar-none, the single best job board for developer candidates, which should automatically make it the best place for employers to find developer talent. There’s a LOT more to be done to serve developers here and we’re just getting warmed up.

So that’s why we took this new investment of $40m.

We’re ecstatic to have Andreessen Horowitz on board. The partners there believe in our idea of programmers taking over (it was Marc Andreessen who coined the phrase “Software is eating the world”). Chris Dixon has been a personal investor in the company since the beginning and has always known we’d be the obvious winner in the Q&A category, and will be joining our board of directors as an observer.

This is not the first time we’ve raised money; we’re proud to have previously taken investments from Union Square Ventures, Index Ventures, Spark Capital, and Bezos Expeditions. We only take outside money when we are 100% confident that the investors share our philosophy completely and after our lawyers have done a ruthless (sorry, investors) job of maintaining control so that it is literally impossible for anyone to mess up our vision of fanatically serving the people who use our site, and continuing to make the Internet a better place to get expert answers to your questions.

For those of you who have been with us since the early days of Our Incredible Journey, thank you. For those of you who are new, welcome. And if you want to learn more, check out our hott new “about” page. Or ask!

News stories from Wednesday 14 January, 2015

Favicon for Web Mozarts 16:39 Resource Discovery with Puli » Post from Web Mozarts Visit off-site link

Two days ago, I announced Puli’s first beta release. If you haven’t heard about Puli before, I recommend you to read that blog post as well as the Puli at a Glance guide in Puli’s documentation.

Today, I would like to show you how Puli’s Discovery Component helps you to build and use powerful Composer packages with less work and more fun than ever before.

The Problem

Many libraries support configuration code, translations, HTML themes or other content in files of a specific format. The Doctrine ORM, for example, is able to load entity mappings from special XML files:

<!-- res/config/doctrine/Acme.Blog.Post.dcm.xml -->
<doctrine-mapping ...>
    <entity name="Acme\Blog\Post">
        <field name="name" type="string" />
    </entity>
</doctrine-mapping>

This mapping, stored in the file Acme.Blog.Post.dcm.xml in our fictional “acme/blog” package, contains all the information Doctrine needs to save our Acme\Blog\Post object in the database.

When setting up Doctrine, we need to pass the location of the *.dcm.xml file to Doctrine’s XmlDriver. That’s easy as long as we do it ourselves, but:

  • What if someone else uses our package? How will they find our file?
  • What if multiple packages provide *.dcm.xml files? How do we find all these files?
  • We need to remove the appropriate setup code after removing a package.
  • We need to adapt the setup code after installing a new package.

Multiply this effort for every other library that uses user-provided files and you end up with a lot of configuration effort. Let’s see how Puli helps us to fix this.

Package Roles

For better understanding, it’s useful to assign two different roles to our packages:

  • Resource consumers, like Doctrine, process files of a certain format.
  • Resource providers, like our “acme/blog” package, ship such files.

Puli connects consumers and providers through a mechanism called resource binding. Resource binding is a very simple mechanism:

  1. At first, the consumer defines a binding type.
  2. Then, one or multiple providers bind resources to these types.
  3. Finally, the consumer fetches all the resources bound to their type and does something with them.

Let’s put on the hat of a Doctrine developer and see how this works in practice.

Discovering Resources

We start by defining the binding type “doctrine/xml-mapping” with Puli’s Command Line Interface (CLI):

$ puli type define doctrine/xml-mapping \
    --description "An XML entity mapping loaded by Doctrine's PuliDriver"

We passed a nicely readable description that is displayed when typing puli type:

Result of the command "puli type"

Great! Now we’ll use Puli’s ResourceDiscovery to find all the Puli resources bound to our type:

foreach ($discovery->find('doctrine/xml-mapping') as $binding) {
    foreach ($binding->getResources() as $resource) {
        // load $resource
    }
}

Remember we’re still wearing the Doctrine developer hat? Let’s put this code into a PuliDriver class so that anybody can easily configure Doctrine to load Puli resources.

Binding Resources

Now, we’ll put on the “acme/blog” developer hat. Let’s bind the XML file from before to Doctrine’s binding type:

$ puli bind /acme/blog/config/doctrine/*.xml doctrine/xml-mapping

The bind command accepts two parameters:

  • The path or glob for the Puli resources we want to bind.
  • The name of the binding type.

We can use puli find to check which resources match the binding:

Result of the command "puli find"

Apparently our XML file was registered successfully.

Application Setup

We’ll change hats one last time. This time, we’ll wear your hat. What do we have to do to use both the “doctrine/orm” package and the “acme/blog” package in our application?

The first thing obviously is to install the packages and the Puli CLI with Composer:

$ composer require doctrine/orm acme/blog puli/cli

Once this is done, we have to configure Doctrine to use the PuliDriver:

use Doctrine\ORM\Configuration;
 
// Puli setup
$factoryClass = PULI_FACTORY_CLASS;
$factory = new $factoryClass();
$repo = $factory->createRepository();
$discovery = $factory->createDiscovery($repo);
 
// Doctrine setup
$config = new Configuration();
$config->setMetadataDriverImpl(new PuliDriver($discovery));
 
// ...

With as little effort as this, Doctrine will now use all the resources bound to the “doctrine/xml-mapping” type in any installed Composer package.

Will it though?

Enabled and Disabled Bindings

Automatically loading stuff from all Composer packages is a bit scary, hence Puli does not enable bindings in your installed packages by default. We can see these bindings when typing puli bind:

Result of the command "puli bind"

If we trust the “acme/blog” developer and actually want to use the binding, we can do so by typing:

$ puli bind --enable 653fc9

That’s all, folks. :) Read more about resource discovery with Puli in the Resource Discovery guide in the documentation. And please leave me your comments below.

News stories from Monday 12 January, 2015

Favicon for Web Mozarts 19:59 Puli 1.0 Beta Released » Post from Web Mozarts Visit off-site link

Today marks the end of a month of very intense development of the Puli library. On December 3rd, 2014 the first alpha version of most of the Puli components and extensions was released. Today, a little more than a month later, I am proud to present to you the first beta release of all the libraries in the Puli ecosystem!

What is Puli?

If you missed my previous blog post, you are probably wondering what this Puli thing is. In short, Puli (pronounced “poo-lee”) is a toolkit which lets you map paths of a virtual resource repository to paths in your Composer package. For example, as the developer of the “acme/blog” package, I can map the path “/acme/blog” to the “res” directory in my package:

$ puli map /acme/blog res

After running this command, I can access all the files in my “res” directory through the Puli path “/acme/blog”. For example, if I’m using Puli’s Twig extension:

// res/views/post.html.twig
echo $twig->render('/acme/blog/views/post.html.twig');

And it’s not just me: every developer using my package can do the same, and I can use the Puli paths of every other package. Basically, Puli is like PSR-4 autoloading for anything that’s not PHP.

You should read the Puli at a Glance guide to learn more about Puli’s exciting possibilities.

The Puli Components

Puli consists of a few core components that implement Puli’s basic functionality. First, let’s talk about the components that you are most likely to integrate into your applications and libraries:

  • The Repository Component implements a PHP API for the persistent storage of arbitrary resources in a resource repository:
    use Puli\Repository\FilesystemRepository;
    use Puli\Repository\Resource\DirectoryResource;
     
    $repo = new FilesystemRepository();
    $repo->add('/config', new DirectoryResource('/path/to/resources/config'));
     
    // /path/to/resources/config/routing.yml
    echo $repo->get('/config/routing.yml')->getBody();
  • The Discovery Component allows you to define binding types and let other packages bind resources to these types. Read the Resource Discovery guide in the documentation to learn more about this topic.
  • The Factory Component contains a single interface PuliFactory. This interface creates repositories and discoveries for you. You can either implement the interface manually, or – and that’s what you usually do – let Puli generate one for you.

Next come the components that you use as a developer in your daily life:

  • The Command Line Interface (CLI) lets you map repository paths, browse the repository, define binding types and bindings and much more by typing a few simple commands in your terminal. The CLI also builds a factory that you can use to load the repository and the discovery in your code:
    $factoryClass = PULI_FACTORY_CLASS;
    $factory = new $factoryClass();
     
    // If you need the resource repository
    $repo = $factory->createRepository();
     
    // If you need the resource discovery
    $discovery = $factory->createDiscovery($repo);

    The configuration that you pass to the CLI is stored in a puli.json file in the root of your Composer package. This file should be distributed with your package.

  • The Composer Plugin loads the puli.json files of all installed Composer packages. Through the plugin, you can access any of the resources and bindings that come with any of the libraries you use.
  • The Repository Manager implements the actual business logic behind the CLI and the Composer Plugin. This is Puli’s workhorse.

The Puli Extensions

Currently, Puli features a few extensions that are mostly targeted at the Symfony ecosystem, because – quite simply – that’s the framework I know best. As soon as the first stable release of Puli is out, I would like to work on extensions for other PHP frameworks, but I could use your help with that.

The following extensions are currently available:

Supporting Libraries

During Puli’s development, I created a few small supporting libraries, because I couldn’t find existing ones of the quality I needed to build a solid foundation for Puli. These libraries also had their release today:

  • webmozart/path-util provides robust, cross-platform utility functions for normalizing and transforming filesystem paths. After using it for a few months, I love its simplicity already. I highly recommend giving it a try (see the short sketch after this list).
  • webmozart/key-value-store provides a simple yet robust KeyValueStore interface with implementations for various backends.
  • webmozart/json is a wrapper for json_encode()/json_decode() that normalizes their behavior across PHP versions and features integrated JSON Schema validation.
  • webmozart/glob implements Git-like globbing in that wildcards (“*”) match both characters and directory separators. I was made aware today that a similar utility seems to exist in the Symfony Finder component, so I’ll look into merging the two packages.
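
For illustration, here is roughly what working with webmozart/path-util looks like (the two methods shown are part of its documented Path API; the example paths are made up):

use Webmozart\PathUtil\Path;

echo Path::canonicalize('/var/www/../log/app.log');      // /var/log/app.log
echo Path::makeRelative('/var/log/app.log', '/var/www'); // ../log/app.log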

Road Map

I would like to release a stable version of the fundamental Repository, Discovery and Factory components by the end of January 2015. These components are quite stable already and I don’t expect any serious changes.

The CLI, Composer Plugin and Repository Manager are a bit more complex. They have undergone heavy changes during the last weeks. All the functionality that is planned for the final release is implemented now, but the components need testing and polishing. I plan to release a final version of these packages in February or March 2015.

Feedback Wanted

To permit a successful stable release, I need your feedback! Please integrate Puli, test it and use it. However – as with any beta version – please don’t use it in production.

Read Puli at a Glance and Getting Started to get started. Happy coding! :)

Please leave me your feedback below. Follow PuliPHP on Twitter to receive all the latest news about Puli.

News stories from Friday 09 January, 2015

Favicon for Grumpy Gamer 17:49 I Was A Teenage Lobot » Post from Grumpy Gamer Visit off-site link

This was the first design document I worked on while at Lucasfilm Games. It was just after Koronis Rift finished and I was really hoping I wouldn't get laid off.  When I first joined Lucasfilm, I was a contractor, not an employee. I don't remember why that was, but I wanted to get hired on full time. I guess I figured I'd show how indispensable I was by helping to churn out game design gold like this.

This is probably one of the first appearances of "Chuck", who would go on to "Chuck the Plant" fame.

You'll also notice the abundance of TM's all over the doc. That joke never gets old.  Right?

Many thanks to Aric Wilmunder for saving this document.

Shameless plug to visit the Thimbleweed Park Development Diary.

[Scans of the design document, pages 1–18]

News stories from Friday 02 January, 2015

Favicon for Grumpy Gamer 00:40 Thimbleweed Park Development Diary » Post from Grumpy Gamer Visit off-site link

The Thimbleweed Park Development Diary is now live. Updated at least every Monday, probably much more.

News stories from Wednesday 31 December, 2014

Favicon for ircmaxell's blog 20:00 2014 - A Year In Review » Post from ircmaxell's blog Visit off-site link
Wow, another year gone by. Where does the time go? Well, considering I've written a year-end summary the past 2 years, I've decided to do it again for this year. So here it is, 2014 in review:

Read more »

News stories from Tuesday 30 December, 2014

Favicon for ircmaxell's blog 19:00 PHP Install Statistics » Post from ircmaxell's blog Visit off-site link
After yesterday's post, I decided to do some math to see how many PHP installs had at least 1 known security vulnerability. So I grabbed statistics from W3Techs and correlated them with the PHP versions that Linux distributions are known to support. I then whipped up a spreadsheet and got some interesting numbers out of it. So interesting that I need to share...
Read more »

News stories from Monday 29 December, 2014

Favicon for ircmaxell's blog 21:00 Being A Responsible Developer » Post from ircmaxell's blog Visit off-site link
Last night, I was listening to the combined DevHell and PHPTownHall Mashup podcast recording, listening to them discuss a topic I talked about in my last blog post. While they definitely understood my points, they for the most part disagreed with me (there was some contention in the discussion though). I don't mind that they disagreed, but I was rather taken aback by their justification. Let me explain...

Read more »

News stories from Thursday 25 December, 2014

Favicon for #openttdcoop 23:35 New member: Hazzard » Post from #openttdcoop Visit off-site link

Hell000 and Merry Christmas! We are happy to announce that our inner circles have gained yet another person, Hazzard!

Being around for a long while, most of you probably know him, but if you don’t: Hazzard is a great builder and person. His logic mechanisms and other constructions put your brain in greater hazard when you see them. He has been generally very helpful, teaching people, being a nice person, and everything else.

Everybody, please welcome Hazzard to the openttdcoop members club!

News stories from Wednesday 24 December, 2014

Favicon for Grumpy Gamer 21:54 Happy Holidays » Post from Grumpy Gamer Visit off-site link


News stories from Monday 22 December, 2014

Favicon for nikic's Blog 01:00 PHP's new hashtable implementation » Post from nikic's Blog Visit off-site link

About three years ago I wrote an article analyzing the memory usage of arrays in PHP 5. As part of the work on the upcoming PHP 7, large parts of the Zend Engine have been rewritten with a focus on smaller data structures requiring fewer allocations. In this article I will provide an overview of the new hashtable implementation and show why it is more efficient than the previous implementation.

To measure memory utilization I am using the following script, which tests the creation of an array with 100000 distinct integers:

$startMemory = memory_get_usage();
$array = range(1, 100000);
echo memory_get_usage() - $startMemory, " bytes\n";

The following table shows the results using PHP 5.6 and PHP 7 on 32bit and 64bit systems:

        |   32 bit |    64 bit
------------------------------
PHP 5.6 | 7.37 MiB | 13.97 MiB
------------------------------
PHP 7.0 | 3.00 MiB |  4.00 MiB

In other words, arrays in PHP 7 use about 2.5 times less memory on 32bit and 3.5 times less on 64bit (LP64), which is quite impressive.

Introduction to hashtables

In essence PHP’s arrays are ordered dictionaries, i.e. they represent an ordered list of key/value pairs, where the key/value mapping is implemented using a hashtable.

A hashtable is a ubiquitous data structure, which essentially solves the problem that computers can only directly represent continuous integer-indexed arrays, whereas programmers often want to use strings or other complex types as keys.

The concept behind a hashtable is very simple: The string key is run through a hashing function, which returns an integer. This integer is then used as an index into a “normal” array. The problem is that two different strings can result in the same hash, as the number of possible strings is virtually infinite while the hash is limited by the integer size. As such hashtables need to implement some kind of collision resolution mechanism.

There are two primary approaches to collision resolution: Open addressing, where elements will be stored at a different index if a collision occurs, and chaining, where all elements hashing to the same index are stored in a linked list. PHP uses the latter mechanism.

Typically hashtables are not explicitly ordered: The order in which elements are stored in the underlying array depends on the hashing function and will be fairly random. But this behavior is not consistent with the semantics of PHP arrays: If you iterate over a PHP array you will get back the elements in the exact order in which they were inserted. This means that PHP’s hashtable implementation has to support an additional mechanism for remembering the order of array elements.
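
A quick script demonstrates this guarantee:

$array = [];
$array['b'] = 1;
$array['a'] = 2;
foreach ($array as $key => $value) {
    echo $key; // prints "ba": insertion order, not hash or alphabetical order
}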

The old hashtable implementation

I’ll only provide a short overview of the old hashtable implementation here; for a more comprehensive explanation please see the hashtable chapter of the PHP Internals Book. The following graphic is a very high-level view of what a PHP 5 hashtable looks like:

[Figure: basic structure of a PHP 5 hashtable (basic_hashtable.svg)]

The elements in the “collision resolution” chain are referred to as “buckets”. Every bucket is individually allocated. What the image glosses over are the actual values stored in these buckets (only the keys are shown here). Values are stored in separately allocated zval structures, which are 16 bytes (32bit) or 24 bytes (64bit) large.

Another thing the image does not show is that the collision resolution list is actually a doubly linked list (which simplifies deletion of elements). Next to the collision resolution list, there is another doubly linked list storing the order of the array elements. For an array containing the keys "a", "b", "c" in this order, this list could look as follows:

[Figure: order list for an array with keys "a", "b", "c" (ordered_hashtable.svg)]

So why was the old hashtable structure so inefficient, both in terms of memory usage and performance? There are a number of primary factors:

  • Buckets require separate allocations. Allocations are slow and additionally require 8 / 16 bytes of allocation overhead. Separate allocations also means that the buckets will be more spread out in memory and as such reduce cache efficiency.
  • Zvals also require separate allocations. Again this is slow and incurs allocation header overhead. Furthermore this requires us to store a pointer to a zval in each bucket. Because the old implementation was overly generic it actually needed not just one, but two pointers for this.
  • The two doubly linked lists require a total of four pointers per bucket. This alone takes up 16 / 32 bytes. Furthermore, traversing linked lists is a very cache-unfriendly operation.

The new hashtable implementation tries to solve (or at least ameliorate) all of these problems.

The new zval implementation

Before getting to the actual hashtable, I’d like to take a quick look at the new zval structure and highlight how it differs from the old one. The zval struct is defined as follows:

struct _zval_struct {
	zend_value value;
	union {
		struct {
			ZEND_ENDIAN_LOHI_4(
				zend_uchar type,
				zend_uchar type_flags,
				zend_uchar const_flags,
				zend_uchar reserved)
		} v;
		uint32_t type_info;
	} u1;
	union {
		uint32_t var_flags;
		uint32_t next;       /* hash collision chain */
		uint32_t cache_slot; /* literal cache slot */
		uint32_t lineno;     /* line number (for ast nodes) */
	} u2;
};

You can safely ignore the ZEND_ENDIAN_LOHI_4 macro in this definition - it is only present to ensure a predictable memory layout across machines with different endianness.

The zval structure has three parts: The first member is the value. The zend_value union is 8 bytes large and can store different kinds of values, including integers, strings, arrays, etc. What is actually stored in there depends on the zval type.

The second part is the 4 byte type_info, which consists of the actual type (like IS_STRING or IS_ARRAY), as well as a number of additional flags providing information about this type. E.g. if the zval is storing an object, then the type flags would say that it is a non-constant, refcounted, garbage-collectible, non-copying type.

The last 4 bytes of the zval structure are normally unused (it’s really just explicit padding, which the compiler would introduce automatically otherwise). However in special contexts this space is used to store some extra information. E.g. AST nodes use it to store a line number, VM constants use it to store a cache slot index and hashtables use it to store the next element in the collision resolution chain - that last part will be important to us.

If you compare this to the previous zval implementation, one difference particularly stands out: The new zval structure no longer stores a refcount. The reason behind this is that the zvals themselves are no longer individually allocated. Instead the zval is directly embedded into whatever is storing it (e.g. a hashtable bucket).

While the zvals themselves no longer use refcounting, complex data types like strings, arrays, objects and resources still use them. Effectively the new zval design has pushed out the refcount (and information for the cycle-collector) from the zval to the array/object/etc. There are a number of advantages to this approach, some of them listed in the following:

  • Zvals storing simple values (like booleans, integers or floats) no longer require any allocations. So this saves the allocation header overhead and improves performance by avoiding unnecessary allocs and frees and improving cache locality.
  • Zvals storing simple values don’t need to store a refcount and GC root buffer.
  • We avoid double refcounting. E.g. previously objects both used the zval refcount and an additional object refcount, which was necessary to support by-object passing semantics.
  • As all complex values now embed a refcount, they can be shared independently of the zval mechanism. In particular it is now also possible to share strings. This is important to the hashtable implementation, as it no longer needs to copy non-interned string keys.

The new hashtable implementation

With all the preliminaries behind us, we can finally look at the new hashtable implementation used by PHP 7. Let’s start by looking at the bucket structure:

typedef struct _Bucket {
	zend_ulong        h;
	zend_string      *key;
	zval              val;
} Bucket;

A bucket is an entry in the hashtable. It contains pretty much what you would expect: A hash h, a string key key and a zval value val. Integer keys are stored in h (the key and hash are identical in this case), in which case the key member will be NULL.

As you can see the zval is directly embedded in the bucket structure, so it doesn’t have to be allocated separately and we don’t have to pay for allocation overhead.

The main hashtable structure is more interesting:

typedef struct _HashTable {
	uint32_t          nTableSize;
	uint32_t          nTableMask;
	uint32_t          nNumUsed;
	uint32_t          nNumOfElements;
	zend_long         nNextFreeElement;
	Bucket           *arData;
	uint32_t         *arHash;
	dtor_func_t       pDestructor;
	uint32_t          nInternalPointer;
	union {
		struct {
			ZEND_ENDIAN_LOHI_3(
				zend_uchar    flags,
				zend_uchar    nApplyCount,
				uint16_t      reserve)
		} v;
		uint32_t flags;
	} u;
} HashTable;

The buckets (= array elements) are stored in the arData array. This array is allocated in powers of two, with the size being stored in nTableSize (the minimum value is 8). The actual number of stored elements is nNumOfElements. Note that this array directly contains the Bucket structures. Previously we used an array of pointers to separately allocated buckets, which means that we needed more alloc/frees, had to pay allocation overhead and also had to pay for the extra pointer.

Order of elements

The arData array stores the elements in order of insertion. So the first array element will be stored in arData[0], the second in arData[1] etc. This does not in any way depend on the used key, only the order of insertion matters here.

So if you store five elements in the hashtable, slots arData[0] to arData[4] will be used and the next free slot is arData[5]. We remember this number in nNumUsed. You may wonder: Why do we store this separately, isn’t it the same as nNumOfElements?

It is, but only as long as only insertion operations are performed. If an element is deleted from a hashtable, we obviously don’t want to move all elements in arData that occur after the deleted element in order to have a continuous array again. Instead we simply mark the deleted value with an IS_UNDEF zval type.

As an example, consider the following code:

$array = [
	'foo' => 0,
	'bar' => 1,
	0     => 2,
	'xyz' => 3,
	2     => 4
];
unset($array[0]);
unset($array['xyz']);

This will result in the following arData structure:

nTableSize     = 8
nNumOfElements = 3
nNumUsed       = 5

[0]: key="foo", val=int(0)
[1]: key="bar", val=int(1)
[2]: val=UNDEF
[3]: val=UNDEF
[4]: h=2, val=int(4)
[5]: NOT INITIALIZED
[6]: NOT INITIALIZED
[7]: NOT INITIALIZED

As you can see the first five arData elements have been used, but elements at position 2 (key 0) and 3 (key 'xyz') have been replaced with an IS_UNDEF tombstone, because they were unset. These elements will just remain wasted memory for now. However, once nNumUsed reaches nTableSize, PHP will try to compact the arData array by dropping any UNDEF entries that have been added along the way. Only if all buckets really contain a value will arData be reallocated to twice its size.

The new way of maintaining array order has several advantages over the doubly linked list that was used in PHP 5.x. One obvious advantage is that we save two pointers per bucket, which corresponds to 8/16 bytes. Additionally it means that iterating an array looks roughly as follows:

uint32_t i;
for (i = 0; i < ht->nNumUsed; ++i) {
	Bucket *b = &ht->arData[i];
	if (Z_ISUNDEF(b->val)) continue;

	// do stuff with bucket
}

This corresponds to a linear scan of memory, which is much more cache-efficient than a linked list traversal (where you go back and forth between relatively random memory addresses).

One problem with the current implementation is that arData never shrinks (unless explicitly told to). So if you create an array with a few million elements and remove them afterwards, the array will still take a lot of memory. We should probably halve the arData size if utilization falls below a certain level.
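
A small experiment makes this visible (illustrative; exact numbers vary by version and platform):

$array = range(1, 100000);
$full = memory_get_usage();
for ($i = 0; $i < 100000; $i++) {
    unset($array[$i]);
}
// Roughly 0: the deleted buckets are only marked IS_UNDEF,
// the arData allocation itself is not released.
echo memory_get_usage() - $full, " bytes difference\n";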

Hashtable lookup

Until now we have only discussed how PHP arrays represent order. The actual hashtable lookup uses the second arHash array, which consists of uint32_t values. The arHash array has the same size (nTableSize) as arData and both are actually allocated as one chunk of memory.

The hash returned from the hashing function (DJBX33A for string keys) is a 32-bit or 64-bit unsigned integer, which is too large to directly use as an index into the hash array. We first need to adjust it to the table size using a modulus operation. Instead of hash % ht->nTableSize we use hash & (ht->nTableSize - 1), which is the same if the size is a power of two, but doesn’t require expensive integer division. The value ht->nTableSize - 1 is stored in ht->nTableMask.
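
As a rough PHP transcription of that preparation step (the real code is C, operates on zend_string and caches the computed hash; this sketch only shows the arithmetic):

// DJBX33A: hash = hash * 33 + character, starting from 5381
function djbx33a($key) {
    $hash = 5381;
    $len = strlen($key);
    for ($i = 0; $i < $len; $i++) {
        $hash = (($hash << 5) + $hash + ord($key[$i])) & 0xFFFFFFFF;
    }
    return $hash;
}

$nTableSize = 8; // always a power of two
$nTableMask = $nTableSize - 1;
$idx = djbx33a("foo") & $nTableMask; // same as % $nTableSize, but cheaper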

Next, we look up the index idx = ht->arHash[hash & ht->nTableMask] in the hash array. This index corresponds to the head of the collision resolution list. So ht->arData[idx] is the first entry we have to examine. If the key stored there matches the one we’re looking for, we’re done.

Otherwise we must continue to the next element in the collision resolution list. The index to this element is stored in bucket->val.u2.next, which are the normally unused last four bytes of the zval structure that get a special meaning in this context. We continue traversing this linked list (which uses indexes instead of pointers) until we either find the right bucket or hit an INVALID_IDX - which means that an element with the given key does not exist.

In code, the lookup mechanism looks like this:

zend_ulong h = zend_string_hash_val(key);
uint32_t idx = ht->arHash[h & ht->nTableMask];
while (idx != INVALID_IDX) {
	Bucket *b = &ht->arData[idx];
	if (b->h == h && zend_string_equals(b->key, key)) {
		return b;
	}
	idx = Z_NEXT(b->val); // b->val.u2.next
}
return NULL;

Let's consider how this approach improves over the previous implementation: In PHP 5.x the collision resolution used a doubly linked pointer list. Using uint32_t indices instead of pointers is better, because they take half the size on 64-bit systems. Additionally, fitting into 4 bytes means that we can embed the “next” link into the otherwise unused zval slot, so we essentially get it for free.

We also use a singly linked list now; there is no “prev” link anymore. The prev link is primarily useful for deleting elements, because you have to adjust the “next” link of the “prev” element when you perform a deletion. However, if the deletion happens by key, you already know the previous element as a result of traversing the collision resolution list.
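
A minimal sketch of a deletion by key illustrates this. It reuses the names from the lookup code above and tracks the previous index while walking the collision list (the real function additionally updates the element counters):

uint32_t idx = ht->arHash[h & ht->nTableMask];
uint32_t prev = INVALID_IDX;
while (idx != INVALID_IDX) {
	Bucket *b = &ht->arData[idx];
	if (b->h == h && zend_string_equals(b->key, key)) {
		// unlink: fix either the list head or the previous "next" link
		if (prev == INVALID_IDX) {
			ht->arHash[h & ht->nTableMask] = Z_NEXT(b->val);
		} else {
			Z_NEXT(ht->arData[prev].val) = Z_NEXT(b->val);
		}
		// mark b->val as an UNDEF tombstone instead of moving elements
		break;
	}
	prev = idx;
	idx = Z_NEXT(b->val);
}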

The few cases where deletion occurs in some other context (e.g. “delete the element the iterator is currently at”) will have to traverse the collision list to find the previous element. But as this is a rather unimportant scenario, we prefer saving memory over saving a list traversal for that case.

Packed hashtables

PHP uses hashtables for all arrays. However, in the rather common case of contiguous, integer-indexed arrays (i.e. real arrays), the whole hashing thing doesn't make much sense. This is why PHP 7 introduces the concept of “packed hashtables”.

In packed hashtables the arHash array is NULL and lookups will directly index into arData. If you’re looking for the key 5 then the element will be located at arData[5] or it doesn’t exist at all. There is no need to traverse a collision resolution list.
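
In code, a packed lookup boils down to a bounds check plus a tombstone check. Here is a minimal sketch using this article's names (not the literal engine code):

Bucket *packed_find(HashTable *ht, zend_ulong h) {
	if (h >= ht->nNumUsed) {
		return NULL; // beyond the used range, so the key cannot exist
	}
	Bucket *b = &ht->arData[h];
	if (Z_ISUNDEF(b->val)) {
		return NULL; // the element was deleted (tombstone)
	}
	return b;
}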

Note that even for integer-indexed arrays PHP has to maintain order. The arrays [0 => 1, 1 => 2] and [1 => 2, 0 => 1] are not the same. The packed hashtable optimization only works if the keys are in ascending order. There can be gaps between them (the keys don't have to be consecutive), but they must always increase. So if elements are inserted into an array in a “wrong” order (e.g. in reverse), the packed hashtable optimization will not be used.

Note furthermore that packed hashtables still store a lot of useless information. For example we can determine the index of a bucket based on its memory address, so bucket->h is redundant. The value bucket->key will always be NULL, so it’s just wasted memory as well.

We keep these useless values around so that buckets always have the same structure, independently of whether or not packing is used. This means that iteration can always use the same code. However we might switch to a “fully packed” structure in the future, where a pure zval array is used if possible.

Empty hashtables

Empty hashtables get a bit of special treatment in both PHP 5.x and PHP 7. If you create an empty array [], chances are pretty good that you won't actually insert any elements into it. As such, the arData/arHash arrays are only allocated when the first element is inserted into the hashtable.

To avoid checking for this special case in many places, a small trick is used: While the nTableSize is set to either the hinted size or the default value of 8, the nTableMask (which is usually nTableSize - 1) is set to zero. This means that hash & ht->nTableMask will always result in the value zero as well.

So the arHash array for this case only needs to have one element (with index zero) that contains an INVALID_IDX value (this special array is called uninitialized_bucket and is allocated statically). When a lookup is performed, we always find the INVALID_IDX value, which means that the key has not been found (which is exactly what you want for an empty table).
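
Sketched in code, the trick looks roughly like this (assuming an INVALID_IDX definition; the exact engine declarations differ):

#define INVALID_IDX ((uint32_t) -1)

// One statically allocated hash array shared by all empty tables.
static const uint32_t uninitialized_bucket[1] = { INVALID_IDX };

// For an empty table: ht->nTableMask = 0; ht->arHash = uninitialized_bucket;
// Every lookup then computes hash & 0 == 0, reads INVALID_IDX and reports
// "key not found", without a special "is this table initialized?" branch.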

Memory utilization

This should cover the most important aspects of the PHP 7 hashtable implementation. First, let's summarize why the new implementation uses less memory. I'll only use the numbers for 64-bit systems here and only look at the per-element size, ignoring the main HashTable structure (which is less significant asymptotically).

In PHP 5.x a whopping 144 bytes per element were required. In PHP 7 the value is down to 36 bytes, or 32 bytes for the packed case (a struct-level sketch of this accounting follows the list). Here's where the difference comes from:

  • Zvals are not individually allocated, so we save 16 bytes allocation overhead.
  • Buckets are not individually allocated, so we save another 16 bytes of allocation overhead.
  • Zvals are 16 bytes smaller for simple values.
  • Keeping order no longer needs 16 bytes for a doubly linked list, instead the order is implicit.
  • The collision list is now singly linked, which saves 8 bytes. Furthermore it’s now an index list and the index is embedded into the zval, so effectively we save another 8 bytes.
  • As the zval is embedded into the bucket, we no longer need to store a pointer to it. Due to details of the previous implementation we actually save two pointers, so that’s another 16 bytes.
  • The length of the key is no longer stored in the bucket, which is another 8 bytes. However, if the key is actually a string and not an integer, the length still has to be stored in the zend_string structure. The exact memory impact in this case is hard to quantify, because zend_string structures are shared, whereas previously hashtables had to copy the string if it wasn’t interned.
  • The array containing the collision list heads is now index based, so it saves 4 bytes per element. For packed arrays it is not necessary at all, in which case we save another 4 bytes.
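
The following standalone sketch (simplified layouts, not the real zend_types.h definitions) shows where the 36 bytes per element on 64-bit systems come from:

#include <stdint.h>
#include <stdio.h>

typedef struct {
	uint64_t value;     // the value itself or a pointer to it
	uint32_t type_info; // the zval type
	uint32_t next;      // collision list link, embedded "for free"
} zval_s;               // 16 bytes

typedef struct {
	zval_s   val;       // zval embedded directly in the bucket
	uint64_t h;         // integer key or cached string hash
	void    *key;       // zend_string* for string keys, NULL otherwise
} bucket_s;             // 32 bytes

int main(void) {
	// 32 bytes per bucket + one uint32_t arHash slot = 36 bytes/element;
	// packed tables drop arHash entirely, leaving 32 bytes/element.
	printf("zval: %zu, bucket: %zu\n", sizeof(zval_s), sizeof(bucket_s));
	return 0;
}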

However it should be clearly said that this summary makes things look better than they really are in several respects. First of all, the new hashtable implementation uses a lot more embedded (as opposed to separately allocated) structures. How can this negatively affect things?

If you look at the actually measured numbers at the start of this article, you'll find that on 64-bit PHP 7 an array with 100000 elements took 4.00 MiB of memory. In this case we're dealing with a packed array, so we would actually expect 32 * 100000 = 3.05 MiB of memory utilization. The reason behind this is that we allocate everything in powers of two. The nTableSize will be 2^17 = 131072 in this case, so we'll allocate 32 * 131072 bytes of memory (which is 4.00 MiB).

Of course the previous hashtable implementation also used power-of-two allocations. However, it only allocated an array of bucket pointers in this way (where each pointer is 8 bytes). Everything else was allocated on demand. So in PHP 7 we lose 32 * 31072 bytes (0.95 MiB) to unused memory, while in PHP 5.x we only waste 8 * 31072 bytes (0.24 MiB).

Another thing to consider is what happens if not all values stored in the array are distinct. For simplicity, let's assume that all values in the array are identical. So let's replace the range in the starting example with an array_fill:

$startMemory = memory_get_usage();
$array = array_fill(0, 100000, 42);
echo memory_get_usage() - $startMemory, " bytes\n";

This script results in the following numbers:

        |   32 bit |    64 bit
------------------------------
PHP 5.6 | 4.70 MiB |  9.39 MiB
------------------------------
PHP 7.0 | 3.00 MiB |  4.00 MiB

As you can see the memory usage on PHP 7 stays the same as in the range case. There is no reason why it would change, as all zvals are separate. On PHP 5.x on the other hand the memory usage is now significantly lower, because only one zval is used for all values. So while we’re still a good bit better off on PHP 7, the difference is smaller now.

Things become even more complicated once we consider string keys (which may or may not be shared or interned) and complex values. The point is that arrays in PHP 7 will take significantly less memory than in PHP 5.x, but the numbers from the introduction are likely too optimistic in many cases.

Performance

I've already talked a lot about memory usage, so let's move on to the next point, namely performance. In the end, the goal of the phpng project wasn't to improve memory usage, but to improve performance. The memory utilization improvement is only a means to an end, in that less memory results in better CPU cache utilization and thus better performance.

However, there are of course a number of other reasons why the new implementation is faster: First of all, we need fewer allocations. Depending on whether or not values are shared, we save one or two allocations per element. Allocations being rather expensive operations, this is quite significant.

Array iteration in particular is now more cache-friendly, because it’s now a linear memory traversal, instead of a random-access linked list traversal.

There’s probably a lot more to be said on the topic of performance, but the main interest in this article was memory usage, so I won’t go into further detail here.

Closing thoughts

PHP 7 undoubtedly has made a big step forward as far as the hashtable implementation is concerned. A lot of useless overhead is gone now.

So the question is: where can we go from here? One idea I already mentioned is to use “fully packed” hashes for the case of increasing integer keys. This would mean using a plain zval array, which is the best we can do without starting to specialize uniformly typed arrays.

There are probably some other directions one could go as well. For example, switching from collision chaining to open addressing (e.g. using Robin Hood probing) could be better both in terms of memory usage (no collision resolution list) and performance (better cache efficiency, depending on the details of the probing algorithm). However, open addressing is relatively hard to combine with the ordering requirement, so this may not be possible to do in a reasonable way.

Another idea is to combine the h and key fields in the bucket structure. Integer keys only use h and string keys already store the hash in key as well. However this would likely have an adverse impact on performance, because fetching the hash will require an additional memory indirection.
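
A purely illustrative layout for this idea (not what PHP 7 actually does) might look as follows; note the extra pointer dereference needed to fetch a string key's hash:

typedef struct _zend_string zend_string; // stores its own hash internally

typedef union {
	uint64_t     h;   // integer key: the key doubles as the hash
	zend_string *key; // string key: the hash must be fetched via the string
} key_or_hash;

// Fetching the hash for a string key now costs one extra memory
// indirection, compared to reading bucket->h directly.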

One last thing that I wish to mention is that PHP 7 improved not only the internal representation of hashtables, but also the API used to work with them. I've regularly had to look up how even simple operations like zend_hash_find had to be used, especially regarding how many levels of indirection are required (hint: three). In PHP 7 you just write zend_hash_find(ht, key) and get back a zval*. Generally I find that writing extensions for PHP 7 has become quite a bit more pleasant.

Hopefully I was able to provide you with some insight into the internals of PHP 7 hashtables. Maybe I'll write a followup article focusing on zvals. I've already touched on some of the differences in this post, but there's a lot more to be said on the topic.

News stories from Friday 19 December, 2014

Favicon for ircmaxell's blog 20:00 On PHP Version Requirements » Post from ircmaxell's blog Visit off-site link
I learned something rather disturbing yesterday. CodeIgniter 3.0 will support PHP 5.2. To put that in context, there hasn't been a supported or secure version of PHP 5.2 since January, 2011. That's nearly 4 years. To me, that's beyond irresponsible... It's negligent... So I tweeted about it (not mentioning the project to give them the chance to realize what the problem was):

I received a bunch of replies. Many people thought I was talking about WordPress. I wasn't, but the same thing does apply to the project. Most people agreed with me, saying that not targeting 5.4 or higher is bad. But some disagreed. Some disagreed strongly. So, I want to talk about that.
Read more »
Favicon for Grumpy Gamer 00:40 Funded! » Post from Grumpy Gamer Visit off-site link

Thimbleweed Park was funded with all stretch goals met, from translations to iOS and Android versions. We can't even begin to thank everyone for all the support and backing.

stretch_goals6.png

You can read the backer update here.

Gary and I are going to take a break during the holidays, then we'll start working full time on Jan 2nd.

There will be a dev blog on thimbleweedpark.com where we'll talk about the game's development. Our goal is to post at least once a week going over art, puzzles, characters, design and code.

Once everything has cleared, I'm going to do a detailed blog post about the ups, downs and surprises of running a Kickstarter.

News stories from Thursday 18 December, 2014

Favicon for ircmaxell's blog 21:31 Stack Machines: Compilers » Post from ircmaxell's blog Visit off-site link
I have the honor today of writing a guest blog post on Igor Wiedler's Blog about Compilers. If you don't know @igorwhiletrue, he's pretty much the craziest developer that I know. And crazy in that genious sort of way. He's been doing a series of blog posts about Stack Machines and building complex runtimes from simple components. Well, today I authored a guest post on compiling code to run on said runtime. The compiler only took about 100 lines of code!!!

Check it out!


News stories from Sunday 14 December, 2014

Favicon for Grumpy Gamer 19:31 Talkies » Post from Grumpy Gamer Visit off-site link

stretch_goals5.png

Really excited we made the Talkies stretch goal. Knowing that an actor is going to read lines you wrote is always exciting.

To answer some questions a few backers (or potential backers) have asked...

Yes, you will be able to turn the talkies off and just read the text.  Yes, you will be able to display the text on screen and listen to the talkies, or not display the text and just listen to the talkies.  And, yes, you will be able to skip each line if you like hearing the voice, but read really fast.  Back in the SCUMM days, the '.' key would end the current line and I plan on implementing that in Thimbleweed Park.  It will cut off the audio, but that's OK because the player is doing it.

Thanks so much for everyone's support and belief in this project. It's going to be a really fun year! Gary and I can't wait to start up the thimbleweedpark.com dev blog and start talking about the game.

News stories from Monday 08 December, 2014

Favicon for Grumpy Gamer 23:22 Talkies First » Post from Grumpy Gamer Visit off-site link

We’re going to swap the Talkies™ and the iOS/Android stretch goals and here is our logic…

stretch_goals4.png

We've heard from a lot of our backers through the comments, private messages and emails who want full voice in Thimbleweed Park. It might be a vocal minority, but it’s a lot more than just a few people. Gary and I also want to do full voice. I love hearing characters come to life through a great actor, it makes the game a lot more accessible, and it’s just a lot of fun to do.

The other reason is that distributing mobile versions to backers is way more complicated than PC/Mac/Linux, so we’re stuck in this situation where backers might need to buy the mobile versions and that’s a little awkward. Plus mobile ports are something we can potentially fund later if we don't hit the stretch goal, but voices need to be done as part of the initial development.

So, for these reasons, we’re going to swap the stretch goals to put talkies first and the mobile ports second. Of course we could still make both goals, and I hope we do! But if we don't... well, it feels like our backers would rather have talkies.

We hope this doesn’t create too much confusion. We wanted to give you some insight into our thought process. Gary and I like to think stuff through and not be impulsive. We might be a little slow, but we try to be very steady and reliable and in the end that's why we'll hopefully make a great game that we all love.

This doesn't mean we won't have iOS/Android ports.  I do most of my gaming on mobile and they are really important, but it felt like the Talkies™ should be integrated into the main development, plus mobile players will get to enjoy them as well.

If you haven't already, please join us on Kickstarter!

News stories from Saturday 06 December, 2014

Favicon for Grumpy Gamer 19:28 Congratulations to Ken and Roberta! » Post from Grumpy Gamer Visit off-site link

Congratulations to Ken and Roberta for their Industry Icon Award. Well deserved.

Over the years, I’ve given Sierra a lot of crap, but the honest fact is that without King's Quest, there would be no Maniac Mansion or Monkey Island. It really did set the template that we all followed.


Kings_Quest_Tandy.png

I’ve told this story before, but you’re going to listen to it again…

A few months into Maniac Mansion, Gary and I had a bunch of fun ideas, some characters, and a creepy old mansion, but what we didn’t have was a game. There was nothing to hang any of our ideas on top of.

I was feeling a little lost. “There is no game”, I kept saying.

We had our Christmas break and I went down to visit my Aunt and Uncle. My eight-year-old cousin was playing King's Quest I. I’d never seen the game before and I watched him for hours.  Everything Gary and I had been talking about suddenly made sense.  Maniac Mansion should be an adventure game.

Without King's Quest, I don’t know if that leap would have happened. No matter how innovative and new something is, it's always built on something else. Maniac Mansion and Monkey Island are built on King's Quest.

We always had a fun rivalry with Sierra and they always made us try harder and be better.

Thank you Ken and Roberta and everyone else at Sierra.

News stories from Wednesday 03 December, 2014

Favicon for Grumpy Gamer 21:23 Maniac Mansion Used a Joystick » Post from Grumpy Gamer Visit off-site link

The C64 version of Maniac Mansion didn't use a mouse, it used one of these:

mm_joystick.jpg

A year later we did the IBM PC version and it had keyboard support for moving the cursor because most PCs didn't have a mouse.  Monkey Island also had cursor key support because not everyone had a mouse.

Use the above facts to impress people at cocktail parties.

Favicon for Web Mozarts 17:49 Puli: Powerful Resource Management for PHP » Post from Web Mozarts Visit off-site link

Since the introduction of Composer and the autoloading standards PSR-0 and PSR-4, the PHP community has changed a lot. Not so long ago, it was difficult to integrate third-party code into your application. Now, it has become a matter of running a command on the terminal and including Composer’s autoload file. As a result, developers share and reuse much more code than ever before.

Unfortunately, sharing your work gets a lot harder when you leave PHP code and enter the land of configuration files, images, CSS files, translation catalogs – in short, any file that is not PHP. For brevity, I’ll call these files resources here. Using resources located in Composer packages is quite tedious: You need to know exactly where the package is installed and where the resource is located in the package. That’s a lot of juggling with absolute and relative file system paths, and it is prone to error.

Plugins, Modules, Bundles

To simplify matters, most frameworks implement their own mechanisms on top of Composer packages. Some call them “plugins”, others “modules”, “bundles” or “packages”. They have in common that they follow some sort of predefined directory layout together with a naming convention that lets you refer to resources in the package. In Symfony, for example, you can refer to a Twig template profiler.html.twig located in FancyProfilerBundle like this:

$twig->render('FancyProfilerBundle::profiler.html.twig');

This only works if you use Symfony, of course. If you want to use the FancyProfiler in a different framework, the current best practice is to extract the framework-agnostic PHP code into a separate package (the FancyProfiler “library”) and put everything else into “plugins”, “modules” and “bundles” tied to the chosen framework. This leads to several problems:

  • You need to duplicate many resource files: images, CSS files or translation catalogs hardly depend on one single framework. If you use a widespread templating engine like Twig, then even your templates will be very similar across frameworks.
  • You need to maintain many packages: The core library plus one package per supported framework. That’s a lot of maintenance work.

Wouldn’t it be nice if this could be simplified?

Puli

One and a half years ago I talked about this problem with Beau Simensen and several others at PHP-FIG. I wrote a blog post about The Power of Uniform Resource Location in PHP. Many people joined the discussion. The understanding of the problem and its solution matured as we spoke.

Today, I am glad to present to you the first (and probably last) alpha version of Puli, a framework-agnostic resource manager for PHP. Puli manages resources in a repository that looks similar to a UNIX file system: You map files and directories to paths in the repository and use the same paths (we’ll call them Puli paths) to find the files again.

The mapping is done in a puli.json file in the root of your project or package:

{
    "resources": {
        "/app": "res"
    }
}

In this example, the Puli path /app is mapped to the directory res in your project. The repository can be dumped as a PHP file with the Puli Command-Line Interface (CLI):

$ puli dump

Use the repository returned from the generated file to access your resources:

$repo = require __DIR__.'/.puli/resource-repository.php';
 
// res/views/index.html.twig
echo $repo->get('/app/views/index.html.twig')->getContents();

Composer Integration

That alone is nice, but not highly useful. However, Puli supports a Composer plugin that loads the puli.json files of all loaded Composer packages. Let’s take the puli.json of the fictional “webmozart/fancy-profiler” package as an example:

{
    "resources": {
        "/webmozart/fancy-profiler": "res"
    }
}

By convention, Puli paths in reusable Composer packages use the vendor and package names as top-level directories. This way it is easy to know where a Puli path belongs. Let’s dump the repository again and list the contained files:

$ puli dump
$ puli list -r /webmozart/fancy-profiler
/webmozart/fancy-profiler/views
/webmozart/fancy-profiler/views/index.html.twig
/webmozart/fancy-profiler/views/layout.html.twig
...

Both in the application and the profiler package, we can access the package’s resources through the repository:

// fancy-profiler/res/views/index.html.twig
echo $repo->get('/webmozart/fancy-profiler/views/index.html.twig')->getContents();

Tool Integration

I think this is quite exciting already, but it gets better once you integrate Puli with your favorite framework or tool. There already is a working Twig Extension which supports Puli paths in Twig templates:

{% extends '/app/views/layout.html.twig' %}
 
{% block content %}
    {# ... #}
{% endblock %}

You can also use relative Puli paths:

{% extends '../layout.html.twig' %}

The Symfony Bridge integrates Puli into the Symfony Config component. With that, you can reference configuration files by their Puli paths:

# routing_dev.yml
_wdt:
    resource: /symfony/web-profiler-bundle/config/routing/wdt.xml
    prefix:   /_wdt

The Symfony Bundle adds Puli support to a Symfony full-stack project. You can also start a new Symfony 2.5 project from the Symfony Puli Edition, if you like. An Assetic Extension is work-in-progress.

I focused on supporting the Symfony ecosystem for now because that is the one I know best, but Puli can, should and hopefully will be integrated into many more frameworks and tools. The Puli repository can be integrated into your favorite IDE so that you can browse and modify the repository without ever leaving your editor. There are countless possibilities.

Getting Started

Download the Puli Alpha version with Composer:

$ composer require puli/puli:~1.0

Make sure you set the “minimum-stability” option in your composer.json properly before running that command:

{
    ...,
    "minimum-stability": "alpha"
}

Beware that this is an alpha version, so some things may not work or may change before the final release. Please do not use Puli in production.

Due to the limited scope of this post, I just scratched the surface of Puli’s functionality here. Read Puli at a Glance to learn everything about what you can do with Puli. Read the very extensive documentation to learn how to use Puli. Head over to the issue tracker if you find bugs.

And of course, please leave a comment here :) I think Puli will significantly change the way we use and share packages. What do you think?

Favicon for ircmaxell's blog 16:00 What About Garbage? » Post from ircmaxell's blog Visit off-site link
If you've been following the news, you'll have noticed that yesterday Composer got a bit of a speed boost. And by "bit of a speed boost", we're talking between 50% and 90% reduction in runtime depending on the complexity of the dependencies. But how did the fix work? And should you make the same sort of change to your projects? For those of you who want the TL;DR answer: no, you shouldn't.

Read more »

News stories from Tuesday 02 December, 2014

Favicon for ircmaxell's blog 16:00 A Point On MVC And Architecture » Post from ircmaxell's blog Visit off-site link
Last week I published a post called Alternatives To MVC. In it, I described some alternatives to MVC and why they all suck as application architectures (or more specifically, are not application architectures). I left a pretty big teaser at the end towards a next post. Well, I'm still working on it. It's a much bigger job than I realized. But I did want to make a comment on a comment that was left on the last post.
Read more »

News stories from Sunday 30 November, 2014

Favicon for Grumpy Gamer 22:21 Translations Achieved! » Post from Grumpy Gamer Visit off-site link

stretch_goals3.png

News stories from Friday 28 November, 2014

Favicon for ircmaxell's blog 16:00 It's All About Time » Post from ircmaxell's blog Visit off-site link
An interesting pull request has been opened against PHP to make bin2hex() constant time. This has led to some interesting discussion on the mailing list (which even got me to reply :-X). There has been pretty good coverage of remote timing attacks in PHP, but it focuses on string comparison. I'd like to talk about other types of timing attacks.

Read more »

News stories from Monday 24 November, 2014

Favicon for ircmaxell's blog 19:00 Alternatives To MVC » Post from ircmaxell's blog Visit off-site link
Last week, I wrote A Beginner's Guide To MVC For The Web. In it, I described some of the problems with both the MVC pattern and the conceptual "MVC" that frameworks use. But what I didn't do is describe better ways. I didn't describe any of the alternatives. So let's do that. Let's talk about some of the alternatives to MVC...

Read more »

News stories from Saturday 22 November, 2014

Favicon for Grumpy Gamer 23:26 Stretch Goals » Post from Grumpy Gamer Visit off-site link

We just announced stretch goals for Thimbleweed Park.

"What the hell is Thimbleweed Park?", I can hear you asking.

It's a Kickstarter for Gary Winnick and my all new classic point & click adventure game.

Now I hear you saying "What the hell are stretch goals?"

Look, there is way too much to explain, just roll with it and go back Thimbleweed Park.

stretch_goals2.png

News stories from Friday 21 November, 2014

Favicon for ircmaxell's blog 18:30 A Beginner's Guide To MVC For The Web » Post from ircmaxell's blog Visit off-site link
There are a bunch of guides out there that claim to be a guide to MVC. It's almost like writing your own framework in that it's "one of those things" that everyone does. I realized that I never wrote my "beginners guide to MVC". So I've decided to do exactly that. Here's my "beginners guide to MVC for the web":

Read more »

News stories from Tuesday 18 November, 2014

Favicon for Grumpy Gamer 16:24 Please Join Us On Kickstarter » Post from Grumpy Gamer Visit off-site link

I'm going to keep this short.

Several months ago, Gary Winnick and I were sitting around talking about Maniac Mansion, old-school point & click adventures, how much fun we had making them and how amazing it was to be at Lucasfilm Games during that era.  We chatted about the charm, simplicity and innocence of the classic graphic adventure games.

We had to call them "Graphic Adventures" because text adventures were still extremely popular. It was a time of innovation and taking risks.

"Wouldn't it be fun to make one of those again?", Gary said.

"Yeah", I replied as a small tear formed in the corner of my eye*.

A few seconds later I said "Let's do a Kickstarter!".

After a long pause, Gary said  "OK".

We immediately started building the world and the story, layering in the backbone puzzles and forming characters around them.  From the beginning, we knew we wanted to make something that was a satire of Twin Peaks, X-Files and True Detective.  It was ripe with flavor and plenty of things to poke fun at.

So we're doing a Kickstarter for an all new classic point & click adventure game called "Thimbleweed Park". It will be like opening a dusty old desk drawer and finding an undiscovered Lucasfilm graphic adventure game you’ve never played before. Good times for all.

Please join us on Kickstarter!

thimbleweed_drawer.jpg

* The small tear in Ron's eye was added by the author for dramatic effect. No tear actually formed.

News stories from Sunday 16 November, 2014

News stories from Saturday 15 November, 2014

News stories from Friday 14 November, 2014

News stories from Thursday 13 November, 2014

News stories from Wednesday 12 November, 2014

Favicon for Fabien Potencier 00:00 PHP CS Fixer finally reaches version 1.0 » Post from Fabien Potencier Visit off-site link

A few years ago, I wrote a small script to automatically fix some common coding standard mistakes people made in Symfony pull requests. It was after I got tired of all the comments people made on pull requests asking contributors to fix their coding standards. As humans, we have much better things to do! The tool helped me fix the coding standard issues after merging pull requests and keep the whole code base sane. It was a manual process I did on a regular basis but it did the job.

After a while, I decided to Open-Source the tool, like I do with almost all the code I write. I was aware of the limitations of the tool, the code was very rudimentary, but as Reid Hoffman said once: "If you are not embarrassed by the first version of your product, you've launched too late." To my surprise, people started to use it on their own code, found bugs, found edge cases, added more fixers, and soon enough, we all realised that using regular expressions for such things is doomed to fail.

Using the PHP tokens to fix coding standards is of course a much better approach, but every time I sat down to rewrite the tool, I got distracted by something that was more pressing. So, the tool stagnated for a while. The only real progress for Symfony was the introduction of fabbot.io which alerts contributors of coding standard issues before I merge the code.

The current stable version of PHP-CS-Fixer was released in August 2014 and it is still based on regular expressions, two years after the first public release. But in the last three months, things got crazy mainly because of Dariusz Ruminski. He did a great job at rewriting everything on top of a parser based on the PHP tokens, helped by 21 other contributors. After 13,000 additions and 5,000 deletions, I'm very proud to announce version 1.0 of PHP-CS-Fixer; it is smarter, it is more robust, and it has more fixers. Any downsides? Yes, speed; the tool is much slower, but it is worth it and enabling the new cache layer helps a lot.

As I learned today on Twitter, a lot of people rely on the PHP CS Fixer on a day to day basis to keep their code clean, and that makes me super happy. You can use the fixer from some IDEs like PhpStorm, NetBeans, or Sublime. You can install it via Composer, a phar, homebrew, or even Grunt. And there is even a Docker image for it!

News stories from Thursday 06 November, 2014

Favicon for Fabien Potencier 00:00 About Personal Github Accounts » Post from Fabien Potencier Visit off-site link

Many of you have a user account on Github. But what are you using it for? As far as Open-Source is concerned, I'm using mine for two different usages:

  • as a way to contribute to other projects by forking repositories and making pull-requests;

  • as a way to host some of my Open-Source projects.

But the more I think about it, the more I think the second usage is wrong most of the time. If you are publishing a small snippet of code, a small demo, or the code for a tutorial you wrote on your blog, that makes a lot of sense. But when it comes to useful and/or popular Open-Source projects, I think that is a mistake.

An Open-Source project should not be tied too much to its creator; the creator just happens to be the first contributor. And for many projects, it will stay that way for a very long time, which is fine. But gradually, as more people contribute, it can confuse some users. The license you choose helps a lot and the way you respond to pull requests and issues is also a great way to show your openness. But that's not enough in the long term. Of course, understanding when it becomes a problem is up to you and definitely not easy. Here are some of my thoughts about some problems I identified in the past.

First, it makes the original developer special and not aligned with how others contribute; you cannot for instance fork the project to make a pull request (with a not-so-nice side-effect of Packagist publishing your branches, which is obviously wrong.)

Then, bringing awareness through a well established organization is probably easier than promoting yourself; it makes your project more easily discoverable.

Also, what if someone starts to contribute more than you? What if you are not interested in maintaining the project anymore? Github makes it very easy to transfer a project to another person, but organizations are almost always a better way in that case.

And I'm not even talking about the bus factor.

As you might have guessed by now, Github organizations are the solution. An organization fixes all the problems and then some more; and creating one is very easy. Again, that only makes sense when your project is somewhat successful, and it is probably even more interesting if you have more than one such project.

A while ago, I decided to do that for Silex and I moved it to its own organization. And I did the same for Twig recently for the same reasons. For those projects, it made sense to create a dedicated organization because there is more than one repository; we moved some related repositories along as well (like the Silex skeleton or the Twig extensions).

Organizations are also a great way to create a group of people working on related topics (like FriendsOfSymfony) or people working with the same standards (The League of Extraordinary Packages).

Last year, I co-created such an organization: FriendsOfPhp. A couple of weeks ago, I moved the PHP security advisories database from the sensiolabs organization to the FriendsOfPhp one and I explained my motivations in a blog post.

Today, I'm doing the same with several of my projects that were previously part of my personal Github account. I have not created an organization per project because they are either too small or they don't need more than one repository; so they would not benefit from a standalone organization.

  • Sismo: A Continuous Testing Server
  • Sami: An API documentation generator
  • PHP-CS-Fixer: A script that fixes Coding Standards
  • Goutte: A simple PHP Web Scraper

If you cloned one of these repositories in the past, you can easily switch to the new Git URL via the following command:

$ git remote set-url origin https://github.com/FriendsOfPhp/XXX.git

News stories from Friday 31 October, 2014

Favicon for ircmaxell's blog 17:00 A Lesson In Security » Post from ircmaxell's blog Visit off-site link
Recently, a severe SQL Injection vulnerability was found in Drupal 7. It was fixed immediately (and correctly), but there was a problem. Attackers made automated scripts to attack unpatched sites. Within hours of the release of the vulnerability fix, sites were being compromised. And when I say compromised, I'm talking remote code execution, backdoors, the lot. Why? Like any attack, it's a chain of issues, that independently aren't as bad, but add up to bad news. Let's talk about them: What went wrong? What went right? And what could have happened better? There's a lesson that every developer needs to learn in here.

Read more »

News stories from Wednesday 29 October, 2014

Favicon for ircmaxell's blog 17:00 Foundations Of OO Design » Post from ircmaxell's blog Visit off-site link
It's quite easy to mix up terminology and talk about making "easy" systems and "simple" ones. But in reality, they are completely different measures, and how we design and architect systems will depend strongly on our goals. By differentiating Simple from Easy, Complex from Hard, we can start to talk about the tradeoffs that designs can give us. And we can then start making better designs.

Read more »

News stories from Monday 27 October, 2014

Favicon for ircmaxell's blog 17:00 You're Doing Agile Wrong » Post from ircmaxell's blog Visit off-site link
To some of you, this may not be new. But to many of the people preaching "Agile Software Development", Agile is not what you think it is. Let me say that again, because it's important: You're Doing Agile Wrong.

Read more »

News stories from Sunday 26 October, 2014

Favicon for Devexp 01:00 Use unsupported Jenkins plugins with Jenkins DSL » Post from Devexp Visit off-site link

In a previous post I wrote about how to Automate Jenkins with the Job DSL Plugin. If you didn't read it, I highly suggest you do, as it will help you better understand what I'll be explaining here.

When you start using the Job DSL Plugin you’ll probably sooner or later need to configure your job with a plugin that is not yet supported. And by “not yet supported” I mean that there aren’t (yet) DSL commands that will generate a job for that specific plugin. Fortunately they provide you with a way to add them ‘manually’ through the Configure Block.

This part is a bit more complex than simply using the DSL commands, because you'll have to understand how it works. Now you did notice I wrote “a bit” … that's because it seems complex, but in fact it isn't. The only thing you need to know is that the plugin will, with the DSL commands, generate the config.xml of your job containing the full configuration of the job.

To have an idea, this is the config.xml of an empty job

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties />
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders/>
  <publishers/>
  <buildWrappers/>
</project>

Let’s see an example of a basic DSL command and the corresponding config.xml.

job {
    name 'Test Job'
    description 'A Test Job'
}
<?xml version='1.0' encoding='UTF-8'?>
<project>
  ...
  <description>A Test Job</description>
  ...
</project>

So you see that every DSL command will generate some part in the config.xml.

Knowing this you’ll understand that we will have to study the config.xml of an existing job to see how the “unsupported” plugin is configured in the config.xml.

Let’s make it a bit more fun by integrating the HipChat Plugin. I created a simple job in jenkins and opened the config.xml file. (I assume you know how to install and configure the plugin in Jenkins)

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties>
    <jenkins.plugins.hipchat.HipChatNotifier_-HipChatJobProperty plugin="hipchat@0.1.4">
      <room></room>
      <startNotification>false</startNotification>
    </jenkins.plugins.hipchat.HipChatNotifier_-HipChatJobProperty>
  </properties>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders/>
  <publishers>
    <jenkins.plugins.hipchat.HipChatNotifier plugin="hipchat@0.1.4">
      <jenkinsUrl>http://jenkins/</jenkinsUrl>
      <authToken>ABCDEFGHIJKLMNOPQRSTUVWXYZ</authToken>
      <room>76124</room>
    </jenkins.plugins.hipchat.HipChatNotifier>
  </publishers>
  <buildWrappers/>
</project>

The values in the publisher section are copied from the Jenkins administration. That's a bit annoying because it means you'll have to expose them in the DSL scripting. At the time of this writing, I haven't found a way to configure them as variables.

Looking at the config.xml, we see that 2 nodes were modified: the properties and the publishers node. Both are children of the root project node. With the Configure Block we can obtain the XML Node to manipulate the DOM.

Get hold on the project node:

job {
  configure { project ->
    // project represents the node <project>
  }
}

Now that we can manipulate the project node, let’s add the properties node:

job {
  configure { project ->
      
    project / 'properties' << 'jenkins.plugins.hipchat.HipChatNotifier_-HipChatJobProperty' {
      room ''
      startNotification false
    }

  }
}

What we did here is tell the parser to append (<<) the block 'jenkins.plugins.hipchat.HipChatNotifier_-HipChatJobProperty' to the node project/properties. And finally in the block we simply enumerate the parameters as key[space]value, as you can see them in the config.xml.

Hint 1: Do not specify the plugin version plugin="hipchat@0.1.4", otherwise it doesn't work.
Hint 2: I append the properties (and below the publishers), because there will/can be others configured through other DSL blocks.

Let’s do the same now for the publishers part:

job {
  configure { project ->
      
    project / 'publishers' << 'jenkins.plugins.hipchat.HipChatNotifier' {
      jenkinsUrl 'http://jenkins/'
      authToken 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
      room '76124'
    }
  }
}

As with the properties, we tell the parser to append (<<) 'jenkins.plugins.hipchat.HipChatNotifier' (without the plugin version) and enumerate the parameters.

Following is the full DSL for adding HipChat Plugin support:

job {
  name "Job with HipChat"
  
  configure { project ->
      
    project / 'properties' << 'jenkins.plugins.hipchat.HipChatNotifier_-HipChatJobProperty' {
      room ''
      startNotification false
    }

    project / 'publishers' << 'jenkins.plugins.hipchat.HipChatNotifier' {
      jenkinsUrl 'http://jenkins/'
      authToken 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
      room '76124'
    }
  }
}

Once you grasp the Configure block, you’ll be able to generate any job you want. The example below uses the configure block to add a missing functionality in an existing predefined GIT DSL:

job {
  scm {
    git {
      remote { 
        url("")
      }
      branch("refs/heads/${branch}")
      configure { node -> //the GitSCM node is passed in
        
        // Add the CleanBeforeCheckout functionality
        node / 'extensions' << 'hudson.plugins.git.extensions.impl.CleanBeforeCheckout'  {
        }
        
        // Add the BitbucketWeb
        node / browser (class: 'hudson.plugins.git.browser.BitbucketWeb') {
          url 'https://bitbucket.org/my-account/my-project/'
        }
      }
    }
  }
}

A handy tool to play with (or test) the generation of your DSL is http://job-dsl.herokuapp.com/. It will prevent you from constantly running your DSL and opening the config.xml from your Jenkins to see if the XML is generated correctly!

Although the Configure block is really awesome, it doesn't beat the predefined DSL commands, so if you have the time I suggest contributing to the project by making it a predefined DSL :) https://github.com/jenkinsci/job-dsl-plugin/blob/master/CONTRIBUTING.md

If you have some other great Configure Block example, share them in the comments :)

News stories from Saturday 25 October, 2014

Favicon for Fabien Potencier 23:00 The PHP Security Advisories Database » Post from Fabien Potencier Visit off-site link

A year and a half ago, I was very proud to announce a new initiative to create a database of known security vulnerabilities for projects using Composer. It has been a great success so far; many people extended the database with their own advisories. As of today, we have vulnerabilities for Doctrine, DomPdf, Laravel, SabreDav, Swiftmailer, Twig, Yii, Zend Framework, and of course Symfony (we also have entries for some Symfony bundles like UserBundle, RestBundle, and JsTranslationBundle.)

The security checker is now included by default in all new Symfony project via sensiolabs/SensioDistributionBundle; checking vulnerabilities is as easy as it can get:

$ ./app/console security:check

If you are not using Symfony, you can easily use the web interface, the command line tool, or the HTTP API. And of course, you are free to build your own tool, based on the advisories stored in the "database".

Today, I've decided to go one step further and clarify my intent with this database: I don't want the database to be controlled by me or SensioLabs; I want to help people find libraries they must upgrade now. That's the reason why I've added a LICENSE for the database, which is now in the public domain.

Also, even if I've been managing this database since the beginning with only good intentions, it is important that the data are not controlled by just one person. We need one centralized repository for all PHP libraries, but a distributed responsibility. As this repository is a good starting point, I've decided to move the repository from the SensioLabs organization to the FriendsOfPHP organization.

I hope that these changes will help the broader PHP community. So, who wants to help?

News stories from Friday 24 October, 2014

Favicon for ircmaxell's blog 17:00 What's In A Type » Post from ircmaxell's blog Visit off-site link
There has been a lot of talk about typing in PHP lately. There are a couple of popular proposals for how to clean up PHP's APIs to be simpler. Most of them involve changing PHP's type system at a very fundamental level. So I thought it would be a good idea to talk about that. What goes into a type?

Read more »
Favicon for Web Mozarts 10:48 Defining PHP Annotations in XML » Post from Web Mozarts Visit off-site link

Annotations have become a popular mechanism in PHP to add metadata to your source code in a simple fashion. Their benefits are clear: They are easy to write and simple to understand. Editors offer increasing support for auto-completing and auto-importing annotations. But there are also various counter-arguments: Annotations are written in documentation blocks, which may be removed from packaged code. Also, they are coupled to the source code. Whenever an annotation is changed, the project needs to be rebuilt. This is desirable in some, but not in other cases.

For these reasons, Symfony always committed to supporting annotations, XML and YAML at the same time – and with the same capabilities – to let our users choose whichever format is appropriate to configure the metadata of their projects. But could we take this one step further? Could we build, for example, XML support directly into the Doctrine annotation library?

Let’s start with a simple example of an annotated class:

namespace Acme\CRM;
 
use Doctrine\ORM\Mapping\Column;
use Doctrine\ORM\Mapping\Entity;
use Symfony\Component\Validator\Constraints\Length;
use Symfony\Component\Validator\Constraints\NotNull;
 
/**
 * @Entity
 */
class Address
{
    /**
     * @Column
     * @NotNull 
     * @Length(min=3)
     */
    private $street;
 
    /**
     * @Column(name="zip-code")
     * @NotNull
     */
    private $zipCode;
}

Right now, if toolkits (such as Doctrine ORM or Symfony Validation) want to support annotations and XML schemas, they have to write separate parsers that duplicate a lot of common code. Wouldn’t it be nice if they could use a generic parser instead?

Let’s try to map the annotations to a generic XML file:

<?xml version="1.0" encoding="UTF-8"?>
<class-mapping xmlns="http://doctrine-project.org/schemas/annotations/class-mapping"
    xmlns:orm="http://doctrine-project.org/schemas/orm"
    xmlns:val="http://symfony.com/schema/dic/validation/constraint-mapping"
    xmlns:prop="http://symfony.com/schema/dic/property-access/property-mapping">
 
<class name="Acme\CRM\Address">
    <orm:entity />
    <property name="street">
        <orm:column />
        <val:not-null />
        <val:length min="3" />
    </property>
    <property name="zipCode">
        <orm:column name="zip-code" />
        <val:not-null />
    </property>
    <method name="activate">
        <prop:setter name="active" />
    </method>
</class>
 
</class-mapping>

As you can see, this is more or less an abstraction of Doctrine’s XML Mapping. The base set of elements – <class-mapping>, <class>, <property> and <method> – is provided by the “http://doctrine-project.org/schemas/annotations/class-mapping” namespace and processed by AnnotationReader. The other namespaces are user-defined and processed by custom tag parsers. These turn tags into annotations for the currently processed element. Let’s load the annotations:

// analogous to the existing AnnotationRegistry::registerAutoloadNamespace()
AnnotationRegistry::registerXmlMappings('/path/to/xml-mappings');
AnnotationRegistry::registerXmlNamespace('http://doctrine-project.org/schemas/orm', function () {
    return new OrmTagParser();
});
// ...
 
$reader = new AnnotationReader();
 
// Inspects doc blocks and registered XML files
$annotations = $reader->getClassAnnotations(new \ReflectionClass('Acme\CRM\Address'));
// => array(object(Doctrine\ORM\Mapping\Entity))

Due to XML’s namespaces it’s possible to combine all the mappings in one file or spread them across multiple files, if desired. So, one file could contain the ORM mapping only:

<!-- ORM mapping -->
<?xml version="1.0" encoding="UTF-8"?>
<map:class-mapping xmlns="http://doctrine-project.org/schemas/orm"
    xmlns:map="http://doctrine-project.org/schemas/annotations/class-mapping">
 
<map:class name="Acme\CRM\Address">
    <entity />
    <map:property name="street">
        <column />
    </map:property>
    <map:property name="zipCode">
        <column name="zip-code" />
    </map:property>
</map:class>
 
</map:class-mapping>

And another one the validation constraint mapping:

<!-- Constraint mapping -->
<?xml version="1.0" encoding="UTF-8"?>
<map:class-mapping xmlns="http://symfony.com/schema/dic/2.7/validation/constraint-mapping"
    xmlns:map="http://doctrine-project.org/schemas/annotations/class-mapping">
 
<map:use class="Acme\CRM\Validation\ZipCode" />
 
<map:class name="Acme\CRM\Address">
    <map:property name="street">
        <not-null />
        <length min="3" />
    </map:property>
    <map:property name="zipCode">
        <not-null />
        <map:annotation class="ZipCode">
            <map:parameter name="strict">true</map:parameter>
        </map:annotation>
    </map:property>
</map:class>
 
</map:class-mapping>

The disadvantage is that custom tag parsers (such as OrmTagParser above) need to be registered before loading annotations. The last example, however, shows a generic (although verbose) way of using custom annotations without writing a custom XML schema and parser.

The advantages are clear: The mapping files are very concise, can be validated against their XML schemas and can be separated from the PHP code. If you want to use annotations, but your users demand support for XML, it’s very easy to write an XML schema and a tag parser for your annotations and plug it in. And at last, the class metadata configuration of different toolkits (Symfony and Doctrine in the above example) can be combined in just one file for small projects.

The above concept certainly has room for improvement: As it is right now, all XML files need to be located and parsed even when the annotations of just one class are loaded. Then again, I think that annotations shouldn’t be parsed on every request anyway. If a toolkit parses annotations with the annotation reader, it should, in my opinion, cache the result somewhere or generate optimized PHP code to speed up subsequent page loads.

It would also be nice to provide a similar, unified annotation definition language for the YAML format. Since YAML doesn’t natively support namespaces – as XML does – this is a bit more tricky.

What do you think? Are you interested in using or implementing such a feature?

News stories from Wednesday 22 October, 2014

Favicon for ircmaxell's blog 17:00 When Rocks Falter » Post from ircmaxell's blog Visit off-site link
I've never been a rock. I'm about as passionate as someone can be when I choose to do something. Unfortunately that means I tend to throw myself (my raw unadulterated self) at my interests. It's just who I am and who I've always been. This has positives and negatives associated with it (especially from a personal perspective).

Throwing yourself at a passion has enormous benefits. You get a lot done, you can truly touch people's lives. You can really change the world. But you also take on a lot of risk. Putting yourself out there is the easiest way to get burned. When you're passionate, it's hard to not take things emotionally. It's hard to not care. After all, caring is where you draw your power from.

I have always been held up by those that I knew were rocks. I always leaned on people who I knew weren't just abiding a flight-of-fancy, but who could weather the tide. But what happens when you start to see those who you thought were rocks falter...?

Read more »

News stories from Monday 20 October, 2014

Favicon for ircmaxell's blog 17:00 Educate, Don't Mediate » Post from ircmaxell's blog Visit off-site link
Recently, there has been a spate of attention about how to deal with eval(base64_decode("blah")); style attacks. A number of posts about "The Dreaded eval(base64_decode()) - And how to protect your site and visitors" have appeared lately, suggesting how to mitigate the attacks. This is downright bad.
Read more »

News stories from Saturday 18 October, 2014

Favicon for Grumpy Gamer 17:59 Blah Blah Blah » Post from Grumpy Gamer Visit off-site link

ron_talking.gif

Blah Blah Blah. Blah Blah Blah Blah Blah Blah Blah Blah Blah.

Blah Blah Blah Blah,  Blah Blah Blah,  Blah Blah Blah Blah.  Blah Blah Blah Blah Blah Blah Blah.  Blah Blah Blah Blah,  Blah Blah Blah Blah Blah Blah Blah Blah Blah.  Blah Blah!!!

Blah,  Blah Blah Blah Blah,  Blah Blah Blah Blah Blah.  Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah,  Blah Blah Blah Blah Blah Blah Blah?  Blah Blah Blah Blah Blah Blah. Blah Blah Blah Blah Blah Blah Blah.

Blah Blah Blah Blah Blah Blah Blah Blah Blah, Blah Blah Blah Blah, Blah Blah Blah Blah Blah Blah Blah Blah. Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah. Blah Blah Blah Blah Blah Blah, Blah Blah Blah Blah Blah, Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah Blah. Blah Blah Blah Blah?

Blah Blah Blah Blah Blah Blah. Blah Blah Blah!

Blah.

News stories from Friday 17 October, 2014

Favicon for ircmaxell's blog 12:00 A Followup To An Open Letter To PHP-FIG » Post from ircmaxell's blog Visit off-site link
A few days ago, I wrote An Open Letter to PHP-FIG. The feedback on it was largely positive, but not all of it. So I feel like I do have a few more things to say.

What follows is a collection of followups to specific points of contention raised about my post. I'm going to ignore the politics and any non-technical discussion here.

Read more »

News stories from Wednesday 15 October, 2014

Favicon for ircmaxell's blog 12:00 An Open Letter To PHP-FIG » Post from ircmaxell's blog Visit off-site link
Dear PHP-FIG,

Please stop trying to solve generic problems. Solve the 50% problem, not the 99% problem.

Signed,

Anthony

PS:

...
Read more »

News stories from Monday 13 October, 2014

Favicon for ircmaxell's blog 17:00 FUD and Flames And Trolls, Oh My! » Post from ircmaxell's blog Visit off-site link
Last weekend I gave the opening keynote at PHPNW14. The talk was recorded, and no, the video isn't online yet. The talk centered around community and how we can come together (and how we are drifting apart). But there was one point I mentioned that I think requires further thought and discussion: there is far less trolling going on than it may seem at first glance.
Read more »

News stories from Tuesday 23 September, 2014

Favicon for Devexp 01:00 Automate Jenkins » Post from Devexp Visit off-site link

jenkins is cool

Jenkins is a powerful continuous integration server which has been around for some time now. I've personally been using it for years and it has never let me down.

However, there will come a time when adding/updating/removing jobs will have an impact on your internal processes. Take feature branches, for example. Logically you will (and should) test them, so you will start making new jobs for each feature, and once they are done, you will remove them. Sure, duplicating an existing job helps a lot, but it's yet another (manual) thing on a developer's todo list. Or it could even be that you don't allow (junior) developers to administer Jenkins, making it the job of the "Jenkins manager".

If you are facing such a situation, you will embrace the Job DSL Plugin.

The Job DSL Plugin allows you to generate jobs through some simple Groovy DSL scripting. I’m not going to explain here how everything works, because the wiki of the plugin does a very good job at that, but instead you’ll find some DSL scripts which I’m currently using on a project. I do however suggest reading the wiki first in order to fully grasp the meaning of the following examples.

Generate jobs based on subversion branches

The following DSL will create a job for each branch it finds in a subversion repository.

// Ask subversion for the list of branches, as XML
def svnCommand = "svn list --xml svn://url_path/branches"
def proc = svnCommand.execute()
proc.waitFor()
def xmlOutput = proc.in.text

def lists = new XmlSlurper().parseText(xmlOutput)

def listOfBranches = lists.list.entry.name

println "Start making jobs ..."

// iterate through branches
listOfBranches.each {
  def branchName = it.text()

  println "- found branch '${branchName}'"

  // one job per branch, polling SCM roughly every 5 minutes
  job {
    name("branch-${branchName}")
    scm {
        svn("svn://url_path/branches/${branchName}")
    }
    triggers {
      scm('H/5 * * * *')
    }
    steps {
      maven("-U clean verify","pom.xml")
    }
  }
}

Generate jobs based on a static list

If you have several libraries which need to be configured exactly in the same way, you could also make use of a static list.

def systems = ["linux", "windows", "osx"]

// Configure the jobs for each system
systems.each {
  def system = it

  job(type: Maven) {
    name("mylibrary-${system}")
    scm {
      git {
        remote {
          url("git@bitbucket.org:some_account/mylibrary-${system}.git")
        }
      }
    }
    goals("-U clean deploy")
  }
}

As you can see, you can achieve some very powerful automation with the Job DSL Plugin.

They already support a lot of plugins, but in case the one you use is not (yet) supported, it is always possible to configure it through Configure Blocks. I had to do that for the HipChat Plugin, which I will explain in detail in a following blog post.

Hope this convinced you to stop creating, editing and removing jobs manually and start doing all that automatically 😉

News stories from Wednesday 27 August, 2014

Favicon for Grumpy Gamer 15:57 My Understanding Of Charts » Post from Grumpy Gamer Visit off-site link

understanding_of_charts.jpg

News stories from Tuesday 26 August, 2014

Favicon for #openttdcoop 11:01 Server Changes » Post from #openttdcoop Visit off-site link

As one of the sysadmins of #openttdcoop, a lot of my work happens in the background. Most changes go unnoticed, some cause minor breakdowns (sorry ;)), but a lot of changes you simply don't see. The changes that mostly went unnoticed were changes to our mail infrastructure, database updates and backup procedures. And that's just a few.

Today, one of the changes that you will see is a change to our paste service. We have switched to a new backend, which was needed. The old pastes are NOT deleted; they can still be reached at http://old-paste.openttdcoop.org. However, do keep in mind that this will go offline at some point, and we strongly advise against creating new pastes there.

Our new backend is currently live at https://paste.openttdcoop.org. In this case we are using sticky-notes as the backend, which gives you more privacy and options compared to the old paste. We do hope the new features help everyone out, including us admins in maintaining it all.

Another change that is already active (and you might not always notice) is a replacement we did for our bundles server. This had to happen at some point, and today it is done. This change won't have much of an impact, but we hope to improve response times with the new server.

These are just a few of the changes you’re going to see. More will follow at some point but this is just a start 😉

Should you have any questions, join in on IRC (#openttdcoop @ OFTC or through http://irc.openttdcoop.org)

News stories from Sunday 10 August, 2014

Favicon for Grumpy Gamer 18:03 Puzzle Dependency Charts » Post from Grumpy Gamer Visit off-site link

In part 1 of 1 in my series of articles on game design, let’s delve into one of the (if not THE) most useful tools for designing adventure games: The Puzzle Dependency Chart. Don’t confuse it with a flow chart; it’s not a flow chart, and the subtle distinctions will hopefully become clear, for they are the key to its usefulness and raw pulsing design power.

There is some dispute in Lucasfilm Games circles over whether they were called Puzzle Dependency Charts or Puzzle Dependency Graphs, and on any given day I'll swear with complete conviction that it was Chart, then the next day swear with complete conviction that it was Graph. For this article, I'm going to go with Chart. It's Sunday.

Gary and I didn’t have Puzzle Dependency Charts for Maniac Mansion, and in a lot of ways it really shows. The game is full of dead end puzzles and the flow is uneven and gets bottlenecked too much.

Puzzle Dependency Charts would have solved most of these problems. I can’t remember when I first came up with the concept; it was probably right before or during the development of The Last Crusade adventure game, and both David Fox and Noah Falstein contributed heavily to what they would become. They reached their full potential during Monkey Island, where I relied on them for every aspect of the puzzle design.

A Puzzle Dependency Chart is a list of all the puzzles and steps for solving a puzzle in an adventure game. They are presented in the form of a graph, with each node connecting to the puzzle or puzzle steps that are needed to get there. They do not generally include story beats unless they are critical to solving a puzzle.

Let’s build one!

gg_pdc_1.jpg

I always work backwards when designing an adventure game, not from the very end of the game, but from the end of puzzle chains.  I usually start with “The player needs to get into the basement”, not “Where should I hide a key to get into some place I haven’t figured out yet.”

I also like to work from left to right; other people like going top to bottom. My rationale for left to right is that I like to put them up on my office wall, wrapping the room with the game design.

So... first, we’ll need to figure out what you need to get into the basement...

gg_pdc_2.jpg

And we then draw a line connecting the two, showing the dependency. “Unlocking the door” is dependent on “Finding the Key”.  Again, it’s not flow, it’s dependency.

Now let’s add a new step to the puzzle called “Oil Hinges” on the door, one that can happen in parallel to the “Finding the Key” puzzle...

gg_pdc_3.jpg

We add two new puzzle nodes, one for the action “Oil Hinges” and its dependency “Find Oil Can”. “Unlocking” the door is not dependent on “Oiling” the hinges, so there is no connection. They do connect into “Opening” the basement door, since they both need to be done.

At this point, the chart is starting to get interesting and is showing us something important: The non-linearity of the design. There are two puzzles the player can be working on while trying to get the basement door open.

There is nothing (NOTHING!) worse than linear adventure games and these charts are a quick visual way to see where the design gets too linear or too unwieldy with choice (also bad).

Let's build it back a little more...

gg_pdc_6.jpg

When you step back and look at a finished Puzzle Dependency Chart, you should see this kind of overall pattern, with a lot of little sub-diamond shaped expansions and contractions of puzzles. Solving one puzzle should open up 2 or 3 new ones, and then those collapse down (but not necessarily at the same rate) to a single solution that then opens up more non-linear puzzles.

gg_pdc_7.jpg

The game starts out with a simple choice, then the puzzles begin to expand out with more and more for the player to be doing in parallel, then collapse back in.

I tend to design adventure games in “acts”, where each act ends with a bottleneck to the next act. I like doing this because it gives players a sense of completion, and they can also file a bunch of knowledge away and (if needed) the inventory can be culled.

gg_pdc_5.jpg

Monkey Island would have looked something like this...

gg_pdc_4.jpg

I don’t have the Puzzle Dependency Chart for Monkey Island, or I’d post it. I’ve seen some online, but they are more “flowcharts” and not “dependency charts”. I’ve had countless arguments with people over the differences and how dependency charts are not flowcharts, bla bla bla. They’re not. I don’t completely know why, but they are different.

Flowcharts are great if you’re trying to solve a game, dependency charts are great if you’re trying to design a game. That’s the best I can come up with.

Here is a page from my MI design notebook that shows a puzzle in the process of being created using Puzzle Dependency Charts. It’s the only way I know how to design an adventure game. I’d be lost without them.

MI2_puzzle1_small.jpg

So, how do you make these charts?

You'll need some software that automatically rebuilds the charts as you connect nodes. If you try and make these using a flowchart program, you’ll spend forever reordering the boxes and making sure lines don’t cross. It’s a frustrating and time consuming process and it gets in the way of using these as a quick tool for design.

Back at Lucasfilm Games, we used some software meant for project scheduling. I don’t remember the name of it, and I’m sure it’s long gone.

The only modern program I’ve found that does this well is OmniGraffle, but it only runs on the Mac. Since OmniGraffle does exactly what I need, I haven’t looked much deeper; I’m sure there are others.

OmniGraffle is built on top of the unix tool called graphviz. Graphviz is great, but you have to feed everything in as a text file. It’s a nerd level 8 program, but it’s what I used for DeathSpank.

You can take a look at the DeathSpank Puzzle Dependency Chart here, but I warn you, it's a big image, so get ready to zoom-n-scroll™. You can also see the graphviz file that produced it.
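
For the curious, a graphviz input file is just a plain text list of dependencies. As a toy illustration only (a hypothetical sketch, not the actual DeathSpank pipeline), a few lines of PHP are enough to emit the basement example from above as a DOT file:

<?php
// Toy sketch: write the basement puzzle dependencies from this post out as
// a graphviz DOT file. Render it with: dot -Tpng basement.dot -o basement.png
$dependencies = [
    'Find Key'     => ['Unlock Door'],
    'Find Oil Can' => ['Oil Hinges'],
    'Unlock Door'  => ['Open Basement Door'],
    'Oil Hinges'   => ['Open Basement Door'],
];

$dot  = "digraph puzzles {\n";
$dot .= "    rankdir=LR;\n";       // left to right, wrapping-the-office-wall style
$dot .= "    node [shape=box];\n";
foreach ($dependencies as $puzzle => $unlocks) {
    foreach ($unlocks as $next) {
        // An edge means "is depended on by", not flow.
        $dot .= sprintf("    \"%s\" -> \"%s\";\n", $puzzle, $next);
    }
}
$dot .= "}\n";

file_put_contents('basement.dot', $dot);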

Hopefully this was interesting. I could spend all day long talking about Puzzle Dependency Charts. Yea, I'm a lot of fun on a first date.

News stories from Wednesday 06 August, 2014

Favicon for Grumpy Gamer 22:52 SCUMM Notes From The C64 » Post from Grumpy Gamer Visit off-site link

More crap that is quickly becoming a fire hazard. Some of my notes from building SCUMM on the C64 for Maniac Mansion.

gg_scumm_docs1_thumb.jpggg_scumm_docs2_thumb.jpggg_scumm_docs3_thumb.jpggg_scumm_docs4_thumb.jpggg_scumm_docs5_thumb.jpggg_scumm_docs6_thumb.jpggg_scumm_docs7_thumb.jpggg_scumm_docs8_thumb.jpggg_scumm_docs9_thumb.jpggg_scumm_docs10_thumb.jpggg_scumm_docs11_thumb.jpg

I'm not sure whose phone number that is on the last page. I'm afraid to call it.

News stories from Monday 04 August, 2014

Favicon for Fabien Potencier 23:00 Signing Project Releases » Post from Fabien Potencier Visit off-site link

About a year ago, I started to sign all my Open-Source project releases. I briefly mentioned it during my SymfonyCon keynote in Warsaw, but this post is going to give you some more details.

Whenever I release a new version of a project, I sign the Git tag with my PGP key: DD4E C589 15FF 888A 8A3D D898 EB8A A69A 566C 0795.

Checking Git Tag Signatures

If you want to verify a specific release, you need to install PGP first, and then get my PGP key:

$ gpg --keyserver pgp.mit.edu --recv-keys 0xeb8aa69a566c0795

Then, use git tag to check the related tag. Here is how to check the Symfony 2.4.2 tag (from a Symfony clone):

$ git tag -v v2.4.2

Verification succeeded if the output contains the key used to sign the tag (566C0795) and a line starting with "Good signature from ...". Because of how Git works, having a good signature on a tag also means that all commits reachable from that tag are covered by this signature (that's why signing all commits/merges is not needed).

You can see the PGP signature by using the following command:

$ git show --show-signature v2.4.2

For the curious ones, I'm going to take Symfony 2.4.2 as an example to explain how it works. First, Git does not sign the contents of a commit itself (which is empty anyway for tags), but its headers. Let's display the headers for the Symfony v2.4.2 tag:

$ git cat-file -p v2.4.2

You should get the following output:

object b70633f92ff71ef490af4c17e7ca3f3bf3d0f304
type commit
tag v2.4.2
tagger Fabien Potencier <fabien.potencier@gmail.com> 1392233223 +0100

created tag 2.4.2
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.13 (Darwin)

iF4EABEIAAYFAlL7ywcACgkQ64qmmlZsB5W1cAEAtZOVz5OT7i8vAEiLqnMYyM5n
+XMbyTMVXyYfBqjqkmUA/AxAFTp7oTeHY3yepx/uuxF91+DOnvbxf4b2BqSCx0dq
=sv1G
-----END PGP SIGNATURE-----

The PGP signature is calculated on all lines up to the beginning of the signature:

object b70633f92ff71ef490af4c17e7ca3f3bf3d0f304
type commit
tag v2.4.2
tagger Fabien Potencier <fabien.potencier@gmail.com> 1392233223 +0100

created tag 2.4.2

You can try it by yourself by saving those lines in a test file, and create a test.sig file with the PGP signature:

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.13 (Darwin)

iF4EABEIAAYFAlL7ywcACgkQ64qmmlZsB5W1cAEAtZOVz5OT7i8vAEiLqnMYyM5n
+XMbyTMVXyYfBqjqkmUA/AxAFTp7oTeHY3yepx/uuxF91+DOnvbxf4b2BqSCx0dq
=sv1G
-----END PGP SIGNATURE-----

Then, check that the signature matches the Git headers with the following command:

$ gpg --verify test.sig test

So, when signing a tag, you sign the commit sha1 (and so all reachable commits), but also the tag name (and so the version you expect to get).

Signing Github Archives

That's great, but when using Composer, you can get the code either as a Git clone (--prefer-source) or as an archive (--prefer-dist). If Composer uses the latter, you cannot use the signature coming from the tag, so how can you check the validity of what Composer just downloaded?

Whenever I make a new release, I also publish a file containing both a sha1 for the zip file as returned by the Github API (https://api.github.com/repos/XXX/XXX/zipball/VERSION) and a sha1 calculated on the file contents from the zip (the exact same files installed by Composer.) Those files are hosted on a dedicated checksums repository on Github.

As an example, let's say I have a project using Symfony 2.4.2 (you can check the version installed by Composer by running composer show -i). The sha1s are available here: https://raw.githubusercontent.com/sensiolabs/checksums/master/symfony/symfony/v2.4.2.txt.

This file is signed, so you first need to verify it:

$ curl -O https://raw.githubusercontent.com/sensiolabs/checksums/master/symfony/symfony/v2.4.2.txt
$ gpg --verify v2.4.2.txt

Now, you can check the validity of the files downloaded and installed by Composer:

$ cd PATH/TO/vendor/symfony/symfony
$ find . -type f -print0 | xargs -0 shasum | shasum

The sha1 displayed should match the one from the file you've just downloaded (the one under the files_sha1 entry.)
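
If you want to script that comparison, here is a rough sketch in PHP. Note that the "files_sha1: <hash>" line format is an assumption on my part, so adapt the parsing to the real checksum file:

<?php
// Rough sketch: compare the locally computed hash of the vendor files with
// the published files_sha1 value. The "files_sha1: <hash>" line format is
// an assumption; check the real v2.4.2.txt for the exact layout.
$expected = null;
foreach (file('v2.4.2.txt') as $line) {
    if (preg_match('/^files_sha1:\s*([0-9a-f]{40})/', trim($line), $m)) {
        $expected = $m[1];
    }
}

// Same pipeline as above: hash every file, then hash the list of hashes.
chdir('PATH/TO/vendor/symfony/symfony');
$output = shell_exec('find . -type f -print0 | xargs -0 shasum | shasum');
$actual = preg_split('/\s+/', trim($output))[0]; // shasum prints "<hash>  -"

echo ($expected !== null && $actual === $expected) ? "OK\n" : "KO\n";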

To make it easier, you can also check all your dependencies via a simple script provided in the repository. From your project root directory (where the composer.json file is stored), run the following:

$ PATH/TO/check-vendors.sh

It will output something along the lines of:

symfony/swiftmailer-bundle@v2.2.6                        OK  files signature
symfony/symfony@v2.5.2                                   KO  files signature
twig/extensions@v1.0.1                                   OK  files signature
twig/twig@v1.15.0                                        OK  files signature
white-october/pagerfanta-bundle@dev-master               --  unknown package
willdurand/hateoas@1.0.x-dev                             --  unknown package

 1 packages are potentially corrupted.
 Check that your did not add/modify/delete some files.

Consider the checksum feature as experimental and, as such, any feedback would be much appreciated.

News stories from Sunday 03 August, 2014

Favicon for Grumpy Gamer 21:24 2D Point and Click Engine Recommendations » Post from Grumpy Gamer Visit off-site link

SCUMM_legacy.jpg

I’m looking for some good recommendations on modern 2D point-and-click adventure game engines. These should be complete engines, not just advice to use Lua or Pascal (it’s making a comeback). I want to look at the whole engine, not just the scripting language. PC based is required. Mobile is OK. HTML5 is not necessary. Screw JavaScript. Screw Lua too, but not as hard as JavaScript.

I’m not so much interested in using them, as I’d just like to dissect and deconstruct what the state of the art is today.

P.S. I don’t know why I hate Lua so much. I haven’t really used it other than hacking WoW UI mods, but there is something about the syntax that makes it feel like fingernails on a chalkboard.

P.P.S It's wonderful that "modern 2d point-and-click" isn't an oxymoron anymore.

P.P.P.S Big bonus points if you've actually used the engine. I do know how to use Google.

P.P.P.P.S I want engines that are made for adventure games, not general purpose game engines.

News stories from Thursday 24 July, 2014

Favicon for Grumpy Gamer 17:50 Best. Ending. Ever. » Post from Grumpy Gamer Visit off-site link

An email sent to me from LucasArts Marketing/Support letting me know they "finally" found some people who liked the ending to Monkey Island 2.

mi2-emails.jpg


Favicon for Joel on Software 01:14 Trello, Inc. » Post from Joel on Software Visit off-site link

Hello? is this thing on?

I’m not sure if I even know how to operate this “blog” device any more. It’s been a year since my last post. I’m retired from blogging, remember?

Want to hear something funny? The only way I can post blog posts is by remote-desktopping into a carefully preserved Windows 7 machine which we keep in a server closet running a bizarrely messed-up old copy of CityDesk which I somehow hacked together and which only runs on that particular machine. The shame!

I do need to fill you in on some exciting Trello News, though. As you no doubt know, Trello is the amazing visual project management system we developed at Fog Creek.

Let me catch you up. As legend has it, back in yonder days, early twenty-oh-eleven, we launched a modest little initiative at Fog Creek to try to come up with new product ideas. We peeled off eight developers. The idea was that they would break into four teams. Two people each. Each team would work for a few months building a prototype or MVP for some product idea. Hopefully, at least one of those ideas would turn into something people wanted.

One of those teams started working on the concept that became Trello. The idea seemed so good that we doubled that team to four developers. The more we played around with it, the better we liked it. Within nine months, it was starting to look good enough to go public with, so we launched Trello at TechCrunch Disrupt to great acclaim and immediately got our first batch of users.

Over the next three years, Trello showed some real traction. The team grew to about 18 people, almost all in engineering. We did iPhone, iPad, Android, and Web versions. And Kindle. Oh and Android Wear.  The user base grew steadily to 4.6 million people.

Zowie.

Here are some things that surprised me:

  • We’ve successfully made a non-developer product that actually appeals to civilians. We tried to avoid the software developer niche this time, and it worked. I think that’s because Trello is visual. The board / card metaphor makes every board instantly understandable, which seems to attract all types of users who traditionally had never found online project management to be useful or worth doing.
  • It spreads like crazy. It’s a gas that expands to fill all available space. Somebody finds out about Trello from their reading group and adopts it at work; pretty soon their whole company has hundreds of Trello boards for everything from high level road maps to a list of snacks that need to be bought for the break room.
  • People love it. We regularly monitor Twitter for mentions of Trello and the amount of positive sentiment out there is awe-inspiring.

We launched something called Trello Business Class, which, for a small fee, provides all kinds of advanced administration features so that the larger organizations using Trello can manage it better, and Hey Presto, Trello was making money!


Taco got big, too
In the meantime, we started getting calls from investors. “Can we invest in Trello?” they asked. They were starting to notice that whenever they looked around their portfolio companies all they saw was Trello boards everywhere.

We didn’t really need the money; Fog Creek is profitable and could afford to fund Trello development to profitability. And when we told the investors that they could take a minority, non-controlling stake in Fog Creek, we had to start explaining about our culture and our developer tools and our profit sharing plans and our free lunches and private offices and whatnot, and they got confused and said, “hmm, why don’t you keep all that, we just want to invest in Trello.”

Now, we didn’t need the money, but we certainly like money. We had a bunch of ideas for ways we could make Trello grow faster and do all kinds of astonishing new features and hire sales and marketing teams to work on Trello Business Class. We  would have gotten around to all that eventually, but not as quickly as we could with a bit of folding money.

Which led to this fairly complicated plan. We spun out Trello into its own company, Trello Inc., and allowed outside investors to buy a minority stake in that. So now, Trello and Fog Creek are officially separate companies. Trello has a bunch of money in the bank to operate independently. Fog Creek will continue to build FogBugz and Kiln and continue to develop new products every once in a while. Michael Pryor, who co-founded Fog Creek with me in 2000, will be CEO of Trello.

So, yeah. This is the point at which old-time readers of this blog point out that the interest of VCs is not always aligned with the interest of founders, and VCs often screw up the companies they invest in.

That’s mostly true, but not universal. There are smart, founder-friendly VCs out there. And with Trello (and Stack Overflow, for that matter), we didn’t take any outside investment until we already had traction and revenue, so we could choose the investors that we thought were the most entrepreneur-friendly, and we kept control of the company.

In the case of Trello, we had so much interest from investors that we were even able to limit ourselves to investors who were already investors in Stack Exchange and still get the price and terms we wanted. The advantage of this is that we know them, they know us, and they’re aligned enough not to fret about any conflicts of interest which might arise between Stack Exchange and Trello because they have big stakes in both.

Both Index Ventures and Spark Capital will co-lead the investment in Trello, with Bijan Sabet from Spark joining our board. Bijan was an early investor in Twitter, Tumblr, and Foursquare which says a lot about the size of our ambitions for Trello. The other two members of the board are Michael and me.

Even though Fog Creek, Trello, and Stack Exchange are now three separate companies, they are all running basically the same operating system, based on the original microprocessor architecture known as “making a company where the best developers want to work,” or, in simpler terms, treating people well.

This operating system applies both to the physical layer (beautiful daylit private offices, allowing remote work, catered lunches, height-adjustable desks and Aeron chairs, and top-tier coffee), the application layer (health insurance where everything is paid for, liberal vacations, family-friendly policies, reasonable work hours), the presentation layer (clean and pragmatic programming practices, pushing decisions down to the team, hiring smart people and letting them get things done, and a commitment to inclusion and professional development), and mostly, the human layer, where no matter what we do, it’s guided first and foremost by obsession over being fair, humane, kind, and treating each other like family. (Did I tell you I got married?)

So, yeah, there are three companies here, with different products, but every company has a La Marzocco Linea espresso machine in every office, and every company gives you $500 when you or your partner has a baby to get food delivered, and when we’re trying to figure out how to manage people, our number one consideration is how to do so fairly and compassionately.

That architecture is all the stuff I spent ten years ranting on this blog about, but y’all don’t listen, so I’m just going to have to build company after company that runs my own wacky operating system, and eventually you’ll catch on. It’s OK to put people first. You don’t have to be a psychopath or work people to death or create heaps of messy code or work in noisy open offices.

Anyway, that’s the news from our neck of the woods. If the mission of Trello sounds exciting we’ll be hiring a bunch of people soon so please apply!

News stories from Monday 21 July, 2014

Favicon for Grumpy Gamer 16:08 Maniac Mansion Design Doc » Post from Grumpy Gamer Visit off-site link

Even more crap from my Seattle storage unit!

Here is the original pitch document Gary and I used for Maniac Mansion. Gary had done some quick concepts, but we didn't have a real design, screen shots or any code. This was before I realized coding the whole game in 6502 was nuts and began working on the SCUMM system.

There was no official pitch process or "green lighting" at Lucasfilm Games. The main purpose of this document would have been to pass around to the other members of the games group and get feedback and build excitement.

I don't remember a point where the game was "OK'd". It felt like Gary and I just started working on it and assumed we could. It was just the two of us for a long time, so it's not like we were using up company resources. Eventually David Fox would come on to help with SCUMM scripting.

Three people. The way games were meant to be made.

If this document (and the Monkey Island Design Notes) say anything, it's how much ideas change from initial concept to finished game. And that's a good thing. Never be afraid to change your ideas. Refine and edit. If your finished game looks just like your initial idea, then you haven't pushed and challenged yourself hard enough.

It's all part of the creative process. Creativity is a messy process. It wants to be messy and it needs to be messy.

mmdd_page_0_thumb.jpgmmdd_page_1_thumb.jpgmmdd_page_2_thumb.jpgmmdd_page_3_thumb.jpgmmdd_page_4_thumb.jpgmmdd_page_5_thumb.jpg

mmdd_fig_1_thumb.jpgmmdd_fig_2_thumb.jpgmmdd_fig_3_thumb.jpgmmdd_fig_4_thumb.jpgmmdd_fig_5_thumb.jpgmmdd_fig_6_thumb.jpgmmdd_fig_7_thumb.jpgmmdd_fig_8_thumb.jpg

News stories from Friday 18 July, 2014

Favicon for Grumpy Gamer 17:48 Monkey Poster » Post from Grumpy Gamer Visit off-site link

More crap from my storage unit.

monkey_poster_thumb.jpg

Print your own today!



News stories from Thursday 17 July, 2014

Favicon for Grumpy Gamer 01:50 Maniac Mansion Design Notes » Post from Grumpy Gamer Visit off-site link

While cleaning out my storage unit in Seattle, I came across a treasure trove of original documents and backup disks from the early days of Lucasfilm Games and Humongous Entertainment. I hadn't been to the unit in over 10 years and had no idea what was waiting for me.

Here is the first batch... get ready for a week of retro... Grumpy Gamer style...

First up...



mm_design_1_thumb.jpg

An early mock-up of the Maniac Mansion UI. Gary had done a lot of art long before we had a running game, hence the near-finished screen without the verbs.



mm_design_2_thumb.jpg

A map of the mansion right after Gary and I did a big pass at cutting the design down.  Disk space was a bigger concern than production time. We had 320K. That's right. K.



mm_design_3_thumb.jpg

Gary and I were trying to make sense of the mansion and how the puzzles flowed together. It wouldn't be until Monkey Island that the "puzzle dependency chart" would solve most of our adventure game design issues.



mm_design_4_thumb.jpg

More design flow and ideas. The entire concept of getting characters to like you never really made it into the final game. Bobby, Joey and Greg would grow up and become Dave, Syd, Wendy, Bernard, etc..



mm_design_5_thumb.jpg

A really early brainstorm of puzzle ideas. NASA O-ring was probably "too soon" and twenty-five years later the dumb waiter would finally make it into The Cave.


I'm still amazed Gary and I didn't get fired.




News stories from Tuesday 15 July, 2014

Favicon for Grumpy Gamer 22:08 Ten Years Running! » Post from Grumpy Gamer Visit off-site link

old_gg_title.gif

Time flies. The gaming and internet institution known as the Grumpy Gamer Blog has been around for just over ten years.

My first story was posted in May of 2004. Two thousand and four. I'll let that date sink in. Ten years.

The old Grumpy Gamer website was feeling "long in the tooth" and it was starting to bug me that Grumpy Gamer was still using a CRT monitor. He should have been using a flat screen, or more likely just a mobile phone, or maybe those Google smart contact lenses. He would not have been using an Oculus Rift. Don't get me started.

I coded the original Grumpy Gamer from scratch and it was old and fragile and I dreaded every time I had to make a small change or wanted to add a feature.

A week ago I had the odd idea of doing a Commodore 64 theme for the entire site, so I began anew. I could have used some off-the-shelf blogging tool or code base, but where's the fun in that? Born to program.

I'm slowly moving all the old articles over. I started with the ones with the most traffic and am working my way down. I fundamentally changed the markup format, so I can't just import everything. Plus, there is a lot of crap that doesn't want to be imported.  I still need to decide if I'm going to import all the comments. There are a crap-ton of them.

I'd also like to find a different C64 font. This one has kerning, but it lacks unicode characters, neither of which are truly "authentic", but, yeah, who cares.

But the honest truth is...

I've been in this creative funk since Scurvy Scallywags Android shipped and I find myself meandering from quick prototype to quick prototype. I'll work on something for a few days and then abandon it because it's pointless crap. I think I'm up to eight so far.

The most interesting prototype is about being lost in a cavern/cave/dungeon. The environment programmatically builds itself as you explore. There is no entrance and no exit. It is an exercise in the frustration of being lost. You can never find your way out. You just wander and the swearing gets worse and worse as you slowly give up all hope.

I have no sense of direction, so in some ways, maybe it was a little personal in the way I suppose art should be.

I worked on the game for about a week then gave up. Maybe the game was more about being lost than I thought.

Rebuilding Grumpy Gamer was a way to get my brain going again. It was a project with focus and an end. As the saying goes: Just ship something. So I did.

The other saying is: "The Muse visits during the act of creation, not before."

Create and all will follow. Something to always keep in mind.


News stories from Monday 14 July, 2014

Favicon for Grumpy Gamer 18:05 Commodore 64 » Post from Grumpy Gamer Visit off-site link

commodore-64-ad.pngc64_startup.gifmm-c64-porch.pngmm_c64_title.png

News stories from Sunday 13 July, 2014

Favicon for #openttdcoop 17:09 YETIs have arrived! » Post from #openttdcoop Visit off-site link

Ladies and madmen, I am happy to announce that the project I have been working on for the last few months has grown to its first release! First of all I would like to give a huge thanks to frosch, who helped me greatly to get the production mechanism working, but also Alberth for trying to help me as much as possible. I would also like to thank all the people like planetmaker for answering my endless and often stupid questions about NML in general. I greatly appreciate everyone who has supported me with any feedback!

blog_01_1500

After 3 months I have managed to model 14 industries, and I coded all of them in the last two weeks.
Creating some industries took more effort than others; an especially huge amount of effort was put into the 3-X Machinery Factory, with all the robots being animated and the car being assembled.
While some look simpler than others, they often had some problem I had to overcome, but in the end it all works, at least somehow. 🙂
Only the Worker Yard gets the 404 graphic for now.

Functionally:
– The Worker Yard outputs an amount of YETI dudes based on the current year (so it will grow no matter what), but the production can be increased by Food and Building Materials. Food and Building Materials should both have the same effect.
– Other industries all work simply on a Consume->Produce basis, even “primaries”. This is done over time, so you do not get all of the production immediately: 10% of the cargo currently waiting is consumed and produced (so with, say, 200 units waiting, 20 are processed per production cycle).
– I do not know in what way industries die, find out! 😀

– There are only 15 industries; Plantation / Orchard is missing due to missing sprites. And coincidentally I am somehow unable to add a 16th industry… to be added later.

NUTS Unrealistic Train Set 0.7.2 Universal wagons are able to load YETIs (Workers) and will show specific sprites. Older NUTS versions also work, but will show just the flatbed crate graphics.

Currently the file is 30MB, and I have not yet added a single animation. Right now I just want to release this as 0.0.1 and add fancy things later (animations will probably come asap).

Sooo, enjoy it 🙂

V

News stories from Friday 11 July, 2014

Favicon for Grumpy Gamer 01:11 Monkey Bucks » Post from Grumpy Gamer Visit off-site link

monkey_bucks_front.jpgmonkey_bucks_back.jpg

Favicon for Grumpy Gamer 00:48 Booty From My Seattle Storage Space! » Post from Grumpy Gamer Visit off-site link

IMG_1276-small.jpg



News stories from Thursday 10 July, 2014

Favicon for Code Penguin 19:16 My experiences with Kwixo » Post from Code Penguin Visit off-site link

Kwixo is supposedly a response to PayPal, by some French banks.

I tried to use it to allow a simpler way to pay the Weboob Association membership fee. PayPal is out anyway; given the fees it charges, we'd be lucky to see half of the actual fee make it back to a bank account.

We tried twice. With the first member it failed because it asked for so many verifications that he gave up. With the second one, whose bank was one of Kwixo’s partners, it worked. Or so I thought!

After sending me an e-mail telling me it was received, one day later (a Saturday!) they tried to call me [1]. For something that is supposedly on the Internet, why not send an e-mail instead? Anyway, they told me the service was only for exchanges between individuals, and since they saw the mention of “Cotisation” (membership fee) in the payment reason, I had to register with their Association service by calling another number.

The thing is, I shouldn't have to do this. It isn't worth the hassle, and thus this will be my last interaction with them. What this story tells us, however, is that they must get so little business that they can still screen all transaction motives, and afford to call people instead of having some sort of semi-automated support system.

Anyway, most of the membership fees have been paid in cash, and the others by SEPA transfer. For more details, see here.

The BitPay option is for people with no access to SEPA, but is unlikely to be used anytime soon. But at least I was able to explain by e-mail what I would be using them for.

However, I didn’t learn my lesson. I thought Kwixo could work the other way, as a client. Unfortunately, I forgot to never trust a French bank.

I ordered supplies from a website, and chose to pay on delivery, by using Kwixo as an escrow. After all, it was my first order there, and I could use the extra safety.

They asked for a lot of personal details, to an extent I had never been asked before; it already started smelling like a scam. The worst part was that they first asked for some documents, which I sent promptly, and after a day they replied that I had forgotten to send some others, even though they had not asked for them in the first place. This cycle took a whole week, and choked on the fact that my latest electricity bill was deemed “too old”, despite my explaining that it was the absolute latest.

So I told them to go fuck themselves – literally. They did not budge, and I figured they actually never read any text in the mails! So I sent an image showing them to go fuck themselves. It worked; they canceled the order, and I was able to order again without using them. I suspect the people I was interacting with did not even speak French.

This “fraud protection” lost Kwixo a customer, and almost lost the website a customer. The funny thing is, just looking at the order would make any fraud suspicion look silly: the total was well below the price of the machine it was for. Why would I steal that when I had already paid much more? Is the car dealership afraid clients will steal their pens?

  [1] I rarely answer unknown numbers, as I dislike the unsolicited nature of phone calls.
Favicon for Code Penguin 18:50 In case you still think banks know what they are doing » Post from Code Penguin Visit off-site link

Working with Weboob has confirmed my suspicions that banks’ IT departments are clueless (at least the French ones).

It’s not only that they have terrible websites with snake-oil security (e.g. virtual keypads that are easily logged and only bother regular users).

It’s that their approach to security is from another world. When I was working with a client that was a bank a few years ago, they forced on us a lot of stupid things in the name of security, but to make things work, the chosen solutions ended up worse from every point of view, including actual security.

This is not a technical problem; the problem is a lack of technical people where they should be.

The cherry on the cake is the BNP Paribas bank. They have been historically terrible at configuring their DNS server (with a tendency to return a different IP depending on yours, and of course those two IPs gave two different versions of the site… unless one of them was out of commission).
And now, for over a year, they have been forcing SSL connections to 128-bit RC4, which is a known weak cipher. If you try to force something better, the server will reject you!

Banks try hard to be taken seriously, and they usually are. I just can’t help laughing at them.

News stories from Tuesday 08 July, 2014

Favicon for Ramblings of a web guy 16:25 Keeping your data work on the server using UNION » Post from Ramblings of a web guy Visit off-site link
I have found myself using UNION in MySQL more and more lately. In this example, I am using it to speed up queries that are using IN clauses. MySQL handles the IN clause like a big OR operation. Recently, I created what looks like a very crazy query using UNION, that in fact helped our MySQL servers perform much better.

With any technology you use, you have to ask yourself, "What is this tech good at doing?" For me, MySQL has always been excellent at running lots of small queries that use primary, unique, or well defined covering indexes. I guess most databases are good at that. Perhaps that is the bare minimum for any database. MySQL seems to excel at doing this, however. We had a query that looked like this:

select category_id, count(*) from some_table
where
    article_id in (1,2,3,4,5,6,7,8,9) and
    category_id in (11,22,33,44,55,66,77,88,99) and
    some_date_time > now() - interval 30 day
group by
    category_id
There were more things in the where clause. I am not including them all in these examples. MySQL does not have a lot it can do with that query. Maybe there is a key on the date field it can use. And if the date field limits the possible rows, a scan of those rows will be quick. That was not the case here. We were asking for a lot of data to be scanned. Depending on how many items were in the in clauses, this query could take as much as 800 milliseconds to return. Our goal at DealNews is to have all pages generate in under 300 milliseconds. So, this one query was 2.5x our total page time.

In case you were wondering what this query is used for, it is used to calculate the counts of items in sub categories on our category navigation pages. On this page it's the box on the left hand side labeled "Category". Those numbers next to each category are what we are asking this query to return to us.

Because I know how my data is stored and structured, I can fix this slow query. I happen to know that there are many fewer rows per article_id than there are per category_id. There is also a key on this table on article_id and some_date_time. That means that, for a single article_id, MySQL can find the rows it wants very quickly. Without using a union, the only solution would be to run these queries in a loop in code, get all the results back, and reassemble them in code. That is a lot of wasted round trip work for the application, however. You see this pattern a fair amount in PHP code. It is one of my pet peeves. I have written before about keeping the data on the server. The same idea applies here. I turned the above query into this:

select category_id, sum(count) as count from 
(
    (
        select category_id, count(*) as count from some_table
        where
            article_id=1 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=2 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=3 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=4 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=5 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=6 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=7 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=8 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
    union all
    (
        select category_id, count(*) as count from some_table
        where
            article_id=9 and
            category_id in (11,22,33,44,55,66,77,88,99) and
            some_date_time > now() - interval 30 day
        group by
            category_id
    )
) derived_table
group by
    category_id
Pretty gnarly looking huh? The run time of that query is 8ms. Yes, MySQL has to perform 9 subqueries and then the outer query. And because it can use good keys for the subqueries, the total execution time for this query is only 8ms. The data comes back from the database ready to use in one trip to the server. The page generation time for those pages went from a mean of 213ms with a standard deviation of 136ms to a mean of 196ms and standard deviation of 81ms. That may not sound like a lot. Take a look at how much less work the MySQL servers are doing now.

mysql graph showing decrease in rows read

The arrow in the image is when I rolled the change out. Several other graphs show the change in server performance as well.

The UNION is a great way to keep your data on the server until it's ready to come back to your application. Do you think it can be of use to you in your application?
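
Since nobody wants to maintain a statement like that by hand, it is easy to generate. Here is a rough PHP sketch of the idea; the $pdo connection and the hard-coded ID lists are assumptions for illustration:

<?php
// Rough sketch: build the big UNION ALL statement from the article IDs
// instead of maintaining it by hand. $pdo is an assumed PDO connection.
$articleIds  = [1, 2, 3, 4, 5, 6, 7, 8, 9];
$categoryIds = [11, 22, 33, 44, 55, 66, 77, 88, 99];

// Casting every ID to int makes the interpolation below safe.
$catList = implode(',', array_map('intval', $categoryIds));

$subqueries = [];
foreach ($articleIds as $id) {
    $subqueries[] = sprintf(
        '(select category_id, count(*) as count from some_table'
        .' where article_id=%d and category_id in (%s)'
        .' and some_date_time > now() - interval 30 day'
        .' group by category_id)',
        (int) $id,
        $catList
    );
}

$sql = 'select category_id, sum(count) as count from ('
     .implode(' union all ', $subqueries)
     .') derived_table group by category_id';

// One round trip; category_id => count pairs come back ready to use.
$counts = $pdo->query($sql)->fetchAll(PDO::FETCH_KEY_PAIR);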

News stories from Monday 09 June, 2014

Favicon for Ramblings of a web guy 16:14 Parenting When Your Kid Is "An Adult" » Post from Ramblings of a web guy Visit off-site link
When I dropped out of college at 19, I came home to my parents' house. My parents had moved since I had left home. There was no room for me in the new house. I was not there to claim one when they moved in. My Dad and I put up a wall with paneling on it to enclose part of the garage. We cut a hole in the air duct that was in that space. Tada! That was now my bedroom. My room consisted of a concrete floor, three walls with paneling, one concrete block wall, a twin bed (I had a king size when I left 15 months before), and maybe a table. I was not happy. But, that is what was offered to me. I kind of held a grudge about that for a while.

As of right now, my oldest son is 18 years old. He starts college in the fall. I am so very proud of him. He was accepted to an honors program. His grades and testing earned him scholarships. His future is very bright. For this summer, though, he is still home. He has no job. Our attempts to get him to get one have fallen short. He is not motivated to do so, and I refuse to go find him one. So, I am giving him one. In exchange for room and board, gas for his car, the car itself, his car insurance, and whatever money is left after those expenses going into his pocket, he will be my assistant. He will fetch his siblings from various places, run errands for me, do extra chores around the house, and anything else I need. To earn his car, he has been doing the "personal driver" service for me for a while. I am expecting more of him this summer though. This arrangement has its good days and bad days.

Today, I suddenly realized why my parents put me in that basement. The bad news for my son is that our basement is darker, dirtier, hotter and a lot less comfortable than the one I lived in at 19 years old. Let's hope I don't get to the point where I want to put him down there.

News stories from Monday 12 May, 2014

Favicon for Helge's Blog 19:55 Why Michel Reimon Must Go to Brussels » Post from Helge's Blog Visit off-site link

Michel Reimon and Niko Alm (Neos) starting a human chain for a Hypo investigative committee, Feb. 2014. Photo: Der Standard / Matthias Cremer

The European elections are two weeks away, and I am casting my preferential vote for Michel Reimon (Blog/Twitter), second on the Green Party list.

I know Michel from his time as an author and journalist, when, at the end of 2007, he used a mass e-mail to mobilize against the scandalous security police law that had just been passed. Back then I organized the Metternich 2.0 online demo, in which around 200 websites took part, and Michel's mass e-mail grew into a regular "Democratic Salon" that met in Viennese coffee houses for months.

Shortly afterwards, Michel moved into Burgenland state politics and continued to stand out with smart writing. His article "Bequem im Filz" ("Comfortable in the Sleaze"), written in the middle of the merry Ernst Strasser finger-pointing, showed how corruption begins with ourselves. A nuanced, level-headed perspective is his trademark in general. Examples worth reading include his reportage from Syria during the conflict over the Mohammed cartoons, his reckoning with the TTIP free trade agreement, and, again and again, very personal pieces, for instance on hurt or frustration.

And, of course, net politics. Back in 2009 the net politician Eva Lichtenberger got my vote. Even if little was heard of her in this country, her impact behind the scenes was considerable. A search for her name on Heise.de gives you an idea of it.

In 2009 Eva Lichtenberger only barely made it into Parliament; this year it is just as tight for Michel Reimon: the Lower Austrian Greens are putting €200,000 into a preferential-vote campaign designed to get rid of Madeleine Petrovic by shipping her off to Brussels. That would push Michel Reimon down to the third spot on the list, which will most likely not make it into Parliament.

In 2014 net politics matters more than ever, because word has spread well beyond hacker circles that the technical infrastructure for the modern surveillance state has long been in place. Reimon is one of the few political minds who understand net politics, both in the big picture and in its consequences for every single one of us, for society and its culture. That is why it is important to give Michel a preferential vote, and to make sure that everyone who cares about net politics does the same.

A first step is to join here: Ich wähl' Michel ("I'm voting for Michel"). We need smart people of integrity like him in the European Parliament.

News stories from Sunday 04 May, 2014

Favicon for Fabien Potencier 23:00 The rise of Composer and the fall of PEAR » Post from Fabien Potencier Visit off-site link

A couple of months ago, Nils Adermann sent me a nice postcard that reminded me that "3 years ago, we [Nils and me] met for the SymfonyLive hackday in San Francisco." Nils was attending the Symfony conference, as he had announced the year before that phpBB would move to Symfony at some point.

At that time, I was very interested in package managers as I was looking for the best way to manage Symfony2 bundles. I used PEAR for symfony1 plugins, but the code was really messy as PEAR was not built with that use case in mind. The philosophy of Bundler from the Ruby community looked great, so I started to look around for other package managers. After a lot of time researching the best tools, I stumbled upon libzypp and I immediately knew that this was the one. Unfortunately, libzypp is a complex library, written in C, and not really usable as-is for Symfony's needs.

As a good package manager to let users easily install plugins/bundles/MODs was probably also a big concern for phpBB, I talked to Nils about this topic during that 2011 hackday in San Francisco. After sharing my thoughts about libzypp, "..., I [Nils] wrote the first lines of what should become Composer a few months later".

Nils did a great job at converting the C code to PHP code; later on Jordi joined the team and he moved everything to the next level by implementing all the infrastructure needed for such a project.

So, what about PEAR? PEAR served the PHP community for many years, and I think it's time now to make it die.

I've been using PEAR as a package manager since my first PHP project back in 2004. I even wrote a popular PEAR channel server, Pirum (http://pirum.sensiolabs.org/). But today, it's time for me to move on and announce my plan about the PEAR channels I'm managing.

I first tweeted about this topic on February 13th 2014: "I'd like to stop publishing PEAR packages for my projects; #Composer being widespread enough. Any thoughts? #Twig #Swiftmailer #Symfony #php". And on the 14th, I decided to stop working on Pirum: "My first step towards PEAR deprecation: As of today, #Pirum is not maintained anymore. http://pirum.sensiolabs.org/ #php"

As people wanted some stats about the PEAR Symfony channel, I dug into my logs and figured out that most usage came from PHPUnit dependencies: "Stats are clear: my PEAR channels mostly deliver packages related to PHPUnit: Yaml, Console, and Finder. /cc @s_bergmann".

On April 20th 2014, Sebastian Bergmann started the discussion about PEAR support for PHPUnit: "Do people still install PHPUnit via PEAR? Wondering when I can shut down http://pear.phpunit.de". I immediately answered that: "If @s_bergmann stops publishing PEAR packages, I'm going to do the same for #symfony as packages were mainly useful only for #PHPUnit".

And the day after, Sebastian published his plan for deprecating the PHPUnit PEAR channel: "So Long, and Thanks for All the PEARs: https://github.com/sebastianbergmann/phpunit/wiki/End-of-Life-for-PEAR-Installation-Method".

More recently, Pádraic Brady also announced the end of the PEAR channel for Mockery.

Besides Symfony, I also manage PEAR channels for Twig, Swiftmailer, and Pirum. So, here is my plan for all the PEAR channels I maintain:

  • Update the documentation to make it clear that the PEAR channel is deprecated and that Composer is the preferred way to install PHP packages (already done for all projects);

  • Publish a note about the PEAR channel deprecation on the PEAR channel websites (already done for all projects);

  • Publish a blog post to announce the deprecation of the PEAR installation mechanism (Twig, Swiftmailer, and Symfony);

  • Stop releasing new PEAR packages;

  • Remove the PEAR installation mechanism from the official documentation (probably in September this year).

Keep in mind that I'm just talking about stopping publishing new packages and promoting Composer as the primary way to install my libraries and projects; the current packages will continue to be installable for the foreseeable future as I don't plan to shut down the PEAR channels websites anytime soon.
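In practice, migrating a project from a PEAR-installed library to a Composer-installed one mostly means swapping the bootstrap code. Here is a minimal sketch using Twig as the example; the template and variable names are my own, and it assumes a Twig 1.x install via "composer require twig/twig":

<?php
// Before, with Twig installed from a PEAR channel onto the include_path:
// require_once 'Twig/Autoloader.php';
// Twig_Autoloader::register();

// After "composer require twig/twig", a single generated autoloader
// covers every Composer-installed package:
require __DIR__ . '/vendor/autoload.php';

$twig = new Twig_Environment(new Twig_Loader_Array([
    'hello' => 'Hello {{ name }}!',
]));
echo $twig->render('hello', ['name' => 'Composer']);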

On a side note, it's probably a good time to remove PEAR support from PHP itself; and I'm not sure that it would make sense to bundle Composer with PHP.

Happy Composer!

News stories from Wednesday 30 April, 2014

Favicon for Grumpy Gamer 01:48 Who Are These Pirates? » Post from Grumpy Gamer Visit off-site link

WhoAreThesePirates.jpg

This has always bugged me. Now that I've pointed it out, it's going to bug you too.

News stories from Saturday 19 April, 2014

Favicon for Grumpy Gamer 01:00 What is an indie developer? » Post from Grumpy Gamer Visit off-site link

What makes a developer "indie"?

I'm not going to answer that question; instead, I'm just going to ask a lot more questions, mostly because I'm irritated, and asking questions rather than answering them irritates people, and as the saying goes: irritation makes great bedfellows.

What irritates me is this almost "snobbery" that seems to exist in some dev circles about what an "indie" is. I hear devs who call themselves "indie" roll their eyes at other devs who call themselves "indie" because "clearly they aren't indie".

So what makes an indie developer "indie"?  Let's look at the word.

The word "indie" comes from (I assume) the word "independent".  I guess the first question we have to ask is: independent from what? I think most people would say "publishers".

Yet, I know of several devs who proudly call themselves "indie" when they are taking money from publishers (and big publishers at that) and other devs that would sneer at a dev taking publisher money and calling themselves "indie".

What about taking money from investors? If you take money are you not "indie"? What about money from friends or family? Or does it have to be VCs for you to lose "indie" status?

What about Kickstarter?  I guess it's OK for indies to take money from Kickstarter. But are you really "independent"?  3,000 backers who now feel a sense of entitlement might disagree. Devs who feel an intense sense of pressure from backers might also disagree.

Does being "indie" mean your idea is independent from mainstream thinking? Is being an "indie developer" just the new Punk Rock.

Does the type of game you're making define you as "indie"? If a dev is making a metrics driven F2P game, but they are doing it independent of a publisher, does that mean they are not "indie"?

This is one of the biggest areas I see "indie" snobbery kick in.  Snobby "indie" devs will look at an idea and proclaim it "not indie".

Do "indie" games have to be quirky and weird? Do "indie" games have to be about the "art".

What about the dev? Does that matter? Someone once told me I was not "indie" because I have an established name, despite the fact that the games I'm currently working on have taken no money from investors or publishers and are made by three people.

What if the game is hugely successful and makes a ton of money? Does that make it not "indie" anymore? Is being "indie" about being scrappy and clawing your way from nothing? Once you have success, are you no longer "indie"?  Is it like being an "indie band" where once they gain success, they are looked down on by the fans? Does success mean selling-out? Does selling-out revoke your "indie dev" card?

What if the "indie" developer already has lots of money? Does having millions of dollars make them not "indie"? What if they made the money before they went "indie" or even before they started making games or if they have a rich (dead) aunt? Does "indie" mean you have to starve?

Is it OK for an "indie" to hire top-notch marketing and PR people? Or do "indies" have to scrape everything together themselves and use the grassroots network?

Or does "indie" just mean you're not owned by a publisher? How big of a publisher? It's easy to be a publisher these days, most indies who put their games up on Steam are "publishers". The definition of a publisher is that you're publishing the game and the goal of a lot of studios is to "self-publish".

Or does being "indie" just mean you came up with the idea?  The Cave was funded and published by SEGA, so was it an "indie" title? SEGA didn't come up with the idea and exerted no creative control, so does that make it an "indie" title?

I don't know the answers to any of these questions (and maybe there aren't any), but it irritates me that some devs (or fans) look down on devs because they are not "indie" or not "indie enough".

Or is being "indie" just another marketing term? Maybe that's all it means anymore. It's just part of the PR plan.



News stories from Wednesday 09 April, 2014

Favicon for #openttdcoop 14:31 YETI Extended Towns & Industries » Post from #openttdcoop Visit off-site link

Hello!

Just like about 3 years ago, when I announced the first concepts of NUTS, I am glad to announce that I have started to sketch schemes and industries for YETI.
This article serves partly as my own notepad, so I remember the core idea, and partly to let you know about the concepts and gather your feedback.
To demonstrate my ideas, I have created the scheme images below.

YETI_toyland

YETI_sane

Introduction

Years ago, NUTS started being developed because other train newGRFs had so many limitations that the only hope I saw was in creating a new train set which would attempt to fix those gameplay-hurting parts, extended with my own experience.
YETI's situation is similar, yet different. Similar because each of the current industry newGRFs has a lot of downsides.

With the original industries, most people generally get bored after some time and start searching for something new. And so they find ECS, Pikka Basic Industries, OpenGFX+ Industries and FIRS.

But since ECS is completely unusable due to its limiting features and strange production mechanics, that is one down.
Another one out of the game is Pikka Basic Industries: not only do they have strange limitations, like the steel mill requiring a precise amount of coal and iron ore to work, but, most importantly, industries simply die when they empty out.
Of the remaining options, OpenGFX+ is great, but it is “just” the original mechanism – transport cargo and the industry grows, nothing more, nothing less. This should not be underestimated – it is still a ton of fun, and years of OpenTTD players have confirmed that the concept works – but in an industry newGRF people generally also look for some new mechanism.
Last but not least, FIRS has a minimum of limiting, inconvenient features while adding a whole new mechanism of supplying industries, plus a TON of new cargoes/industries – you can even choose them to some extent through economies. In general, FIRS is great (at least in the beginning), but…
The problem with FIRS is that cargoes which can produce supplies automatically become a “better tier”, as you have no reason to use the other cargoes – not to mention the insane amount of effort you have to put into connecting e.g. the clustered farms, for which you get no reward.
In the end you return to OpenGFX+ or Original industries as they simply work, which is unfortunate.
YETI is trying to create a simple yet interesting system which would be fun to play, without overwhelming complexity but allowing for different approaches and ways to play it.

Main YETI system

Now you are of course probably asking how I want to achieve this. Learning from the downsides and upsides of the other sets, I would say that some kind of supply mechanism is very nice, as it encourages building a network that connects everything together so the supplies can be distributed. So I added supplies (Toys/Machinery) which improve the primary industries in some way.

To avoid the kind of confusion FIRS creates, every primary industry has to be useful and contribute to the supplying mechanism somehow. And to prevent supplies from simply reproducing themselves, there are two different kinds – Workers, which improve industry production, and Toys/Machinery, which make the production fluctuate (decrease) less.

Workers also create a new link to towns and their size, so towns play a role too. With that, two chains come into play which boost town size and the number of workers per citizen.

What all that means: you can service one industry chain and survive, but the system motivates you to connect all chains together – not necessarily in perfect balance, as they all contribute to the whole system somehow. You do not get punished for lacking something; you only get rewarded for caring for your industries better.

Other YETI details

In order to motivate connecting more towns, I intend to make the worker amounts grow linearly up to e.g. 500 food and 500 building materials delivered per month; beyond that amount, your deliveries start being less effective. This means growing a gigantic town is an option, but it would probably make more sense to care for multiple towns instead (see the sketch below).
At the same time, when redistributing things you probably “lose” some amount of cargo to imprecise distribution, so the multiple-town strategy would be viable but not overpowered.
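To make the numbers concrete, here is a minimal sketch of such a delivery curve. The 500-per-month threshold comes from the paragraph above; the square-root falloff beyond it is purely my own assumption, as the post does not specify a formula. The same shape would fit the worker/machinery deliveries to industries described next.

<?php
// Effective delivery: linear up to the threshold, diminishing returns beyond.
// The 500/month threshold is from the post; the sqrt falloff is an assumption.
function effectiveDelivery($delivered, $threshold = 500)
{
    if ($delivered <= $threshold) {
        return $delivered; // fully effective below the threshold
    }
    // every unit past the threshold counts for less and less
    return $threshold + sqrt($delivered - $threshold);
}

echo effectiveDelivery(400); // 400 - fully effective
echo effectiveDelivery(900); // 520 - the extra 400 units only count as 20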

The biggest problem with the original industry mechanism is that production stays sane until the later years, but then explodes to astronomical values like 2295 cargo units per month. I think such a value is way too high if no other conditions apply.
Such a condition could be, for example, that you have to dump an enormous amount of workers/machinery into the industry in order to produce that amount – which generally means you are focusing on that industry and do not have many others, so the 2295 does not hurt as much.
Obviously a mechanism similar to the towns' would have to apply – after some amount of workers/machinery, the supplying becomes less efficient, so it would make good sense to give your industries enough to produce e.g. 500 cargo units monthly.

Another important thing FIRS does is clustering, which generally means the company has to use the whole map in order to get all kinds of cargoes. I am definitely not going to follow that path; instead, industries will just spawn randomly over the whole map, so multiplayer games, where each company gets its own piece of land, are unharmed.

What will it look like?

Pixel graphics are great, nice, amazing, and keep the TTD look – the downside is that they are also extremely time-consuming, especially as industries are a ton of pixels, and I need to learn more 3D things for my job anyway.
So the ultimate solution arose – I am going to model and render all of YETI, so you can look forward to extra-zoom sprites.
The general graphical style is going to be similarly wtf to NUTS – weird things and hidden jokes, but with a sane colour scheme (not like toyland).
What all can come, only the Yetis know.
Obviously NUTS is going to be fully compatible – NUTS will get new cargo sprites for that, in case some are missing.

WHEN?!11!1!!!1!

14.7.2014 🙂

Conclusion

I just wanted to let you know that I am working on something new, and if you have constructive ideas, I am interested in hearing them. In case you want to help, I will certainly need somebody to code the thing, as I want to focus 100% on graphics this time.
Thank you for reading, and for your upcoming ideas.
I am not going to be as active on IRC as I used to be, so if you have something to say, please do so in the comments below.

P.S. YETI is not just a name!
V453000 the Yeti

News stories from Monday 07 April, 2014

Favicon for Grumpy Gamer 16:34 Monkey Island Design Notebook Scribblings » Post from Grumpy Gamer Visit off-site link

More scans from the Monkey Island Design Notebook. I'm glad I kept these notebooks; it's a good reminder that ideas don't come out fully formed. Creation is a messy process with lots of twisty turns and dead ends. It's a little sad that so much is done digitally these days. Most of my design notes for The Cave were in Google Docs and I edited them as I went, so the process is lost. Next game, I'm keeping an old-fashioned notebook.

Mark Ferrari or Steve Purcell must have done these. I can't draw this good!

IMG_0501_thumb.jpg

A lot changed here!

IMG_0499_thumb.jpg

Getting the Main Flow right is critical!

IMG_0496_thumb.jpg


News stories from Tuesday 01 April, 2014

Favicon for Grumpy Gamer 01:00 April Fool's Day is Stupid! » Post from Grumpy Gamer Visit off-site link

Wow! For ten years in a row, Grumpy Gamer has been completely April Fool's Day free.

If you need a break from the entire Internet waking up and thinking they are funny (they are not), then this is your sanctuary.

And as a reward for choosing Grumpy Gamer as your place of escape, here is a very early page from the Monkey Island Design Notebook that features time travel! I discarded this very quickly, but I've always had a fascination with time travel in games.

You can see it in the premise Gary and I laid out for Day of the Tentacle, then again in Putt-Putt Travels Through Time, also in my unreleased game Good & Evil, then again in DeathSpank (although not technically time travel), and finally in The Cave.

And in Monkey Island.

IMG_0501_thumb.jpg

News stories from Monday 31 March, 2014

Favicon for #openttdcoop 20:22 2ND NUTS BIRTHDAY » Post from #openttdcoop Visit off-site link

HELLO!

It is happening again, 1st April is here!

Just like last year and the year before that, we can celebrate the existence of NUTS with a new version again.

In the past year NUTS made a lot of progress, even though it seemed quite “complete” a year ago. Most notably, some trains were added (who knew), which improves both the choices a player can make and the awesome factor of the set. Namely, PURR tracks got eaten by NUTS and are now integrated into the same newGRF – which happened because MEOW trains are completely dependent on them. Also, the WetRail vehicles (the wet tracks that were the main surprise in last year's birthday) got completely redone and now count 10 vehicles! Plus many other details, but RAINBOW SLUGS are a thing to never forget!

At the same time, NUTS got so huge that it became very confusing to new(er) players. Lately especially, I have been trying to make this barrier easier to breach – first of all by adding the wiki, which tries to explain how each vehicle works. Not just that, though: another step in that direction was making vehicles expire, so only the useful ones remain. And with this version I made all vehicles 1 tile long, so you can autoreplace any to any without losing wagons or getting into problems. My intention was also to introduce an Ultimate wagon which could attach to any train and adapt its sprites/parameters accordingly, but the obstacles in coding such a thing were way too large. So I at least added a parameter to NUTS (it gets its first parameter after 2 years =D) which makes only universal wagons purchasable, for more simplicity. For full choice you can simply switch it back at any point.

It might be noteworthy that NUTS passed 100,000 total downloads not long ago, currently counting 100,992 downloads for versions 0-67.

Enjoy! (:

Favicon for Grumpy Gamer 16:52 Even More Monkey Island Design Scribbles. » Post from Grumpy Gamer Visit off-site link

I am not going to throw these out! That was a joke! Several years ago they got water-damaged, so now they are sealed in waterproof wrapping and kept safe and insured for $1,000,000.

Also, this is not the "design document"; these are just notes and ideas I'd jotted down. There wasn't a formal design document for the game, just the large complete puzzle dependency chart I kept on my wall. I have no idea where that went.

Many more to come. Posting these is easier than writing actual blog entries. I'm lazy.

Notes and ideas for Ghost ship and on Monkey Island.

IMG_0495_thumb.jpg

The dream sequence had to wait until Monkey Island 2.

IMG_0498_thumb.jpg

Room layout sketches.

IMG_0506_thumb.jpg


News stories from Friday 28 March, 2014

Favicon for Grumpy Gamer 17:09 More Tales From The Monkey Island Design Notebook » Post from Grumpy Gamer Visit off-site link

Very early brainstorming about ideas and story.

IMG_0512_thumb.jpg

First pass at some puzzles on Monkey Island

IMG_0497_thumb.jpg

Just writing ideas down. I'm surprised "get milk and bread" doesn't appear on this.

IMG_0505_thumb.jpg

Map from when ship sailing was more top-down and directly controlled.

IMG_0492_thumb.jpg


Favicon for Grumpy Gamer 03:05 Monkey Island Design Notebook #1 » Post from Grumpy Gamer Visit off-site link

I'm doing some house cleaning and I came across my Monkey Island 1 and 2 design notebooks.  It's interesting to see what changed and what remained the same.

I'll post more... If I don't throw them out. They are smelling kind of musty and I'm running out of space.

My first sketch of Monkey Island

MI1_island_small.jpg

Early puzzle diagram for Largo (before he was named Largo LaGrande)

MI2_puzzle1_small.jpg


News stories from Friday 14 March, 2014

Favicon for nikic's Blog 01:00 Methods on primitive types in PHP » Post from nikic's Blog Visit off-site link

A few days ago Anthony Ferrara wrote down some thoughts on the future of PHP. I concur with most of his opinions, but not all of them. In this post I'll focus on one particular aspect: turning primitive types like strings or arrays into "pseudo-objects" by allowing method calls to be performed on them.

Let's start off with a few examples of what this entails:

$str = "test foo bar";
$str->length();      // == strlen($str)        == 12
$str->indexOf("foo") // == strpos($str, "foo") == 5
$str->split(" ")     // == explode(" ", $str)  == ["test", "foo", "bar"]
$str->slice(4, 3)    // == substr($str, 4, 3)  == "foo"

$array = ["test", "foo", "bar"];
$array->length()       // == count($array)             == 3
$array->join(" ")      // == implode(" ", $array)      == "test foo bar"
$array->slice(1, 2)    // == array_slice($array, 1, 2) == ["foo", "bar"]
$array->flip()         // == array_flip($array)        == ["test" => 0, "foo" => 1, "bar" => 2]

Here $str is just a normal string and $array just a normal array - they aren't objects. We just give them a bit of object-like behavior by allowing methods to be called on them.

Note that this isn’t far off dreaming, but something that already exists right now. The scalar objects PHP extension allows you to define methods for the primitive PHP types.

The introduction of method-call support for primitive types comes with a number of advantages that I’ll outline in the following:

An opportunity for a cleaner API

Probably the most common complaint you get to hear about PHP is the inconsistent and unclear naming of functions in the standard library, as well as the equally inconsistent and unclear parameter order. Some typical examples:

// different naming conventions
strpos
str_replace

// totally unclear names
strcspn                  // STRing Complement SPaN
strpbrk                  // STRing Pointer BReaK

// inverted parameter order
strpos($haystack, $needle)
array_search($needle, $haystack)

While this issue is often overemphasized (we do have IDEs), it is hard to deny that the situation is rather suboptimal. It should also be noted that many functions exhibit problems that go beyond having a weird name: often, edge-case behaviors were not properly considered, creating the need to handle them specially in the calling code. (For the string functions, edge cases usually involve empty strings or offsets at the very end of a string.)
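A classic instance of such an edge case (my example, not one from the post) is strpos() returning false when the needle is not found, which a loose comparison cannot tell apart from a match at offset 0:

<?php
$haystack = "foo bar";

// Wrong: strpos() returns 0 here, and 0 == false, so this branch runs
// even though the needle was found at the very start.
if (strpos($haystack, "foo") == false) {
    echo "not found?!\n";
}

// Correct: a strict comparison distinguishes false from offset 0.
if (strpos($haystack, "foo") === false) {
    echo "really not found\n";
}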

A common suggestion is to just add a huge number of function aliases in PHP 6, which will unify the function names and parameter orders. So we’d have string\pos(), string\replace(), string\complement_span() or something in that direction. Personally (and this seems to be the opinion of many php-src devs) this makes little sense to me. The current function names are deeply ingrained into the muscle memory of any PHP programmer and applying a few trivial cosmetic changes to them just doesn’t seem worth it.

The introduction of an OO API for primitive types, on the other hand, offers the opportunity for an API redesign as a side effect of switching to a new paradigm. It also offers a truly clean slate, without the need to meet any expectations set by the old procedural API. Two examples:

  • I would very much