
News stories from Saturday 23 July, 2016

heise Security 16:40 – Snowden teaches iPhones whistleblowing

NSA whistleblower Edward Snowden is publicly thinking about a smart smartphone case that would warn its owner of suspicious transmission activity from the phone. If the idea proves feasible, the device is to come to market first for the iPhone 6.

News stories from Friday 22 July, 2016

heise Security 18:02 – Free decryption tools take on eleven ransomware Trojans

AVG and Trend Micro have updated their free tools, which in some circumstances allow victims of various encryption Trojans to regain access to their data.

heise Security 16:19 – Celebrity email accounts hacked: prison sentence for US man

A young American spied on Hollywood stars, among others, by gaining access to more than 360 email accounts via phishing. He has now been sentenced for it.

heise Security 15:03 – Post-Snowden: Affordable crypto engines for everyone

What good is encryption if the hardware is already compromised? That is what a group of developers asked themselves after the Snowden revelations. The result is the CrypTech project, which has now presented the alpha version of its open-source hardware security module.

heise Security 14:11 – Security firm Quadsys hacked a competitor

Members of the management of a British security firm allegedly hacked a competitor's databases to get hold of customer data. The accused have now admitted as much.

heise Security 12:53 – US police want to unlock a dead man's smartphone with an artificial finger

A US police agency wants to unlock a dead man's smartphone using a 3D-printed finger, hoping that this will help catch the smartphone owner's murderer.

News stories from Thursday 21 July, 2016

heise Security 17:16 – Against the NSA & Co.: Snowden presents a surveillance indicator for iPhones

NSA whistleblower Edward Snowden has teamed up with a well-known US hacker: the two want to give journalists confidence that their iPhone has not been hacked by intelligence agencies and is not secretly transmitting data.

heise Security 17:06 – Security update: Typo3 defends itself against eavesdroppers

The Typo3 CMS contains seven security holes, among them its susceptibility to the httpoxy problem. A new version is available.

heise Security 16:23 – New PHP version puts the httpoxy problem to rest

PHP 7.0.9 brings no new features; the team has exclusively closed security holes.

heise Security 15:23 – Cisco's Unified Computing System vulnerable to malicious code

A critical security hole affects the Unified Computing System Performance Manager. Admins should promptly install the patched version that is available.

heise Security 14:51 – Critical bug: Important update for Mac network monitor Little Snitch

A bug allows an attacker to outsmart the Mac software's network filter – the newly released version is meant to fix the problem. Little Snitch monitors outgoing network connections on Mac OS X.

heise Security 12:09 – Nextcloud 10 Beta: More security and stability

The vendor of the open-source file-sharing platform Nextcloud has announced its next release, with the security and sharing features reworked most of all.

Symfony Blog 11:35 – New in Symfony 3.2: Workflow component

Workflows are a fundamental element in lots of organizations' structures. They describe a sequence of operations that can be executed repeatedly to provide some service (e.g. buying a product in an e-commerce application), process some information (e.g. publishing some content in a CMS application), etc.

In Symfony 3.2 we added a new Workflow component to help you define those workflows in your applications. Technically, the component implements a "workflow net", which is a subclass of the Petri net.

In practice, to create a workflow you define "states" and "transitions" (which are the events that may occur between two states). The following example shows a minimal workflow to publish some content:

framework:
    workflows:
        article_publishing:
            supports:
                - AppBundle\Entity\Article
            places:
                - draft
                - spellchecked
                - published
            transitions:
                spellcheck:
                    from: draft
                    to:   spellchecked
                publish:
                    from: spellchecked
                    to:   published

Now you can start using this workflow in your templates and controllers. For example, in a template:

{# the workflow name is optional when there is just one workflow for the class #}
{% if workflow_can(article, 'publish') %}
    <a href="...">Publish article</a>
{% endif %}

{# if more than one workflow is defined for the 'Article' class #}
{% if workflow_can(article, 'publish', 'article_publishing') %}
    <a href="...">Publish article</a>
{% endif %}

{# ... #}

{% for transition in workflow_transitions(article) %}
    <a href="...">{{ transition.name }}</a>
{% else %}
    No actions available for this article.
{% endfor %}

In a controller, you can get any defined workflow by its name thanks to the workflow registry created by Symfony, and then apply any of its transitions to your object:

use AppBundle\Entity\Article;
use Symfony\Component\Workflow\Exception\LogicException;

public function reviewAction(Article $article)
{
    // the try/catch is needed because this transition could already have been applied
    try {
        $this->get('workflow.article_publishing')->apply($article, 'spellcheck');
    } catch (LogicException $e) {
        // ...
    }
}

If you want to execute custom logic when a transition happens, you can hook listeners to the events triggered by the component.
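For example, here is a minimal sketch of such a listener, written as an event subscriber; the event name follows the workflow.<workflow name>.transition.<transition name> convention, and the class name and notification logic are illustrative assumptions:

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\Workflow\Event\Event;

class ArticlePublishingSubscriber implements EventSubscriberInterface
{
    public function onPublish(Event $event)
    {
        // getSubject() returns the object going through the transition,
        // i.e. the Article instance in the workflow defined above
        $article = $event->getSubject();

        // custom logic: notify the author, log the change, etc.
    }

    public static function getSubscribedEvents()
    {
        return [
            'workflow.article_publishing.transition.publish' => 'onPublish',
        ];
    }
}

Registering the class as a service tagged as an event subscriber is enough for the dispatcher to call it whenever the publish transition is applied.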

Check out this demo application for a full example of the workflow component in action and check out this GitHub project for an unofficial port of the component for Symfony 2.3+ versions.


heise Security 11:05 – Firefox will block Flash starting in August – partially

Mozilla will start blocking Flash content in Firefox in August 2016. Initially only elements that can easily be replaced are affected.

heise Security 09:41 – iOS 9.3.3 and OS X 10.11.6: BSI recommends a prompt update

The German Federal Office for Information Security (BSI) advises installing the Apple updates released this week. They plug numerous security holes.

heise Security 09:40 – Zypries at the IETF: State weakening of encryption cannot be entirely ruled out

The meeting between politicians and technologists on the sidelines of IETF96 in Berlin brought no all-clear on state attacks on encryption.

News stories from Tuesday 19 July, 2016

A List Apart: The Full Feed 16:00 – Adapting to Input

Jeremy Keith once observed that our fixed-width, non-responsive designs were built on top of a consensual hallucination. We knew the web didn’t have a fixed viewport size, but we willfully ignored that reality because it made our jobs easier.

The proliferation of mobile devices forced us into the light. Responsive web design gave us the techniques to design for the rediscovered reality that the web comes in many sizes.

And yet there is another consensual hallucination—the idea that desktop equals keyboard and mouse, while phones equal touch.

It’s time to break free of our assumptions about input and form factors. It’s time to reveal the truth about input.

Four truths about input

  1. Input is exploding — The last decade has seen everything from accelerometers to GPS to 3D touch.
  2. Input is a continuum — Phones have keyboards and cursors; desktop computers have touchscreens.
  3. Input is undetectable — Browser detection of touch, and nearly every other input type, is unreliable.
  4. Input is transient — Knowing what input someone uses one moment tells you little about what will be used next.

Being adaptable

In the early days of mobile web we created pitfalls for ourselves such as “mobile context.” We’ve since learned that mobile context is a myth. People use their phones everywhere and for any task, “especially when it’s their only or most convenient option.”

When it comes to input, there is a danger of making a similar mistake. We think of a physical keyboard as being better suited to complex tasks than an onscreen keyboard.

But there are many people whose primary access to the internet is via mobile devices. Those same people are comfortable with virtual keyboards, and we shouldn’t ask them to switch to a physical keyboard to get the best experience.

Even for those of us who spend our days on computers, sometimes a virtual keyboard is better. Perhaps we’re on a plane that has started to descend. In that moment, being able to detach a keyboard and work on a touchscreen is the difference between continuing our task or stowing our laptop for landing.

So who are we to judge what input is better? We have no more control over the input someone uses than we do the size of their screen.

Becoming flexible

Confronting the truth about input can be overwhelming at first. But we’ve been here before. We’ve learned how to design for a continuum of screen sizes; we can learn how to adapt to input—starting with these seven design principles.

Design for multiple concurrent inputs

The idea that we’re either designing for desktop-with-a-mouse or touch-on-mobile is a false dichotomy. People often have access to multiple inputs at the same time. Someone using a Windows 10 laptop or a Chromebook Pixel may be able to use the trackpad and touchscreen concurrently.

There are many web pages that detect touch events and then make incorrect assumptions. Some see the touch events and decide to deliver a mobile experience regardless of form factor. Others have different branches of their code for touch and mouse and once you’re in one branch of the code, you cannot switch to the other.

At minimum, we need to ensure that our web pages don’t prevent people from using multiple types of input.

Ideally, we would look for ways to take advantage of multiple inputs used together to create better experiences and enable behavior that otherwise wouldn’t be possible.

Make web pages that are accessible

When someone uses a remote control’s directional pad to interact with a web page on a TV, the browser sends arrow key events behind the scenes. This is a pattern that new forms of input use repeatedly—they build on top of the existing forms of input.

Because of this, one of the best ways to ensure that your web application will be able to support new forms of input is to make sure that it is accessible.

The information provided to help assistive devices navigate web pages is also used by new types of input. In fact, many of the new forms of input had their beginnings as assistive technology. Using Cortana to navigate the web on an Xbox One is not so different than using voice to control Safari on a Mac.

Design for the largest target size by default

A mouse is more precise than our fingers for selecting items on a screen. Buttons and other controls designed for a mouse can be smaller than those designed for touch. That means something designed for a mouse may be unusable by someone using a touchscreen.

However, something designed for touch is not only usable by mouse, but is often easier to select due to Fitts’s Law, which says that “the time to acquire a target is a function of the distance to and size of the target.”
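In its commonly cited Shannon formulation, that relationship is written as MT = a + b * log2(D/W + 1), where MT is the time to reach the target, D is the distance to the target, W is the target's width along the axis of motion, and a and b are empirically determined constants; making W larger shrinks the logarithm and therefore the time.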

Plus, larger targets are easier for users with lower dexterity, whether that is a permanent condition or a temporary one caused by the environment. At the moment, the largest target size is touch, so this means designing touch first.

As Josh Clark once said, “when any desktop machine could have a touch interface, we have to proceed as if they all do.”

Design for modes of interaction instead of input types

Gmail’s display density settings illustrate the benefit of designing for user interaction instead of input types.

Gmail Interface

By default, Gmail uses a comfortable display density setting. If someone wants to fit more information on the screen, they can switch to the compact display density setting.

It so happens that these two settings map well to different types of input. The comfortable setting is touch-friendly. And compact is well suited for a mouse.

But Gmail doesn’t confine these options to a particular input. Someone using a touchscreen laptop could choose to use the compact settings. Doing so sacrifices the utility of the laptop’s touchscreen, but the laptop owner gets to make that choice instead of the developer making it for her.

Vimeo made a similar choice with their discontinued feature called Couch Mode. Couch Mode was optimized for the 10ft viewing experience and supported remote controls. But there was nothing that prevented someone from using it on their desktop computer. Or for that matter, using the standard Vimeo experience on their TV.

In both cases, the companies designed for use cases instead of a specific form factor or input, or, worse, a specific input inferred from a form factor.

Abstract baseline input

When we’re working on responsive web designs at Cloud Four, we’ve found that the labels “mobile,” “tablet,” and “desktop” are problematic. Those labels create images in people’s minds that are often not true. Instead, we prefer “narrow,” “wide,” “tall,” and “short” to talk about the screens we’re designing for.

Similarly, words like “click” and “tap” betray assumptions about what type of input someone might use. Using more general terms such as “point” and “select” helps prevent us from inadvertently designing for a particular input.

We should also abstract baseline input in our code. Mouse and touch events are entirely different JavaScript APIs, which makes it difficult to write applications that support both without duplicating a lot of code.

The Pointer Events specification normalizes mouse, touch, and stylus events into a single API. This means for basic input, you only have to write your logic once.

Pointer events map well to existing mouse events. Instead of mousedown, use pointerdown. And if you need to tailor an interaction to a specific type of input, you can check the event's pointerType property and provide alternate logic—for example, to support gestures for touchscreens.

Pointer Events are a W3C standard and the jQuery team maintains a Pointer Events Polyfill for browsers that don’t yet support the standard.

Progressively enhance input

After baseline input has been wrangled, the fun begins. We need to start exploring what can be done with all the new input types available to us.

Perhaps you can find some innovative uses for the gyroscope like Warby Parker’s product page, which uses the gyroscope to turn the model’s head. And because the feature is built using progressive enhancement, it also works with mouse or touch.

Warby Parker UI

The camera can be used to scan credit cards on iOS or create a photo booth in browsers that support getUserMedia. Normal input forms can be enhanced with the accept attribute to capture images or video via the HTML Media Capture specification:

<input type="file" accept="image/*">
<input type="file" accept="video/*;capture=camcorder">
<input type="file" accept="audio/*;capture=microphone">

Make your forms easier to complete by ensuring they work with autofill. Google has found that users complete forms up to 30 percent faster when using autofill. And keep an eye on the Payment Request API, which will make collecting payment simple for customers.

Or if you really want to push the new boundaries of input, the Web Speech API can be used to enhance form fields in browsers that support it. And Physical Web beacons can be combined with Web Bluetooth to create experiences that are better than native.

Make input part of your test plans

Over the last few years, test plans have evolved to include mobile and tablet devices. But I have yet to see a test plan that includes testing for stylus support.

It makes intuitive sense that people check out faster when using autofill, but none of the ecommerce projects that I’ve worked on have verified that their checkout forms support autofill.

We need to incorporate input in our test plans. If you have a device testing lab, make input one of the criteria you use to determine what new devices to purchase. And if you don’t have a device testing lab, look for an open device testing lab near you and consider contributing to the effort.

The way of the web

Now is the time to experiment with new forms of web input. The key is to build a baseline input experience that works everywhere and then progressively enhance to take advantage of new capabilities of devices if they are available.

With input, as with viewport size, we must be adaptable. It is the way of the web.

News stories from Sunday 17 July, 2016

Symfony Blog 11:25 – A week of symfony #498 (11-17 July 2016)

This week, Symfony's activity slowed down significantly, as always happens in midsummer in the Northern Hemisphere: a bug related to the retrieval of the username when using forwarding was fixed, the list of HTTP safe methods was updated, and the Serializer added support for argument objects.

Symfony development highlights

2.7 changelog:

  • 500c2cd: [HttpFoundation] added OPTIONS and TRACE to the list of safe methods
  • 30997a4: [Security] fixed the retrieval of the last username when using forwarding

2.8 changelog:

  • 7c39ac1: [ClassLoader] fixed declared classes being computed when not needed

3.1 changelog:

  • 414d9ef: [DoctrineBridge] added missing error code
  • cf691fb: [Serializer] included the format in the cache key

Master changelog:

  • c221908: [Serializer] added support for argument objects

Newest issues and pull requests

They talked about us



News stories from Friday 15 July, 2016

Symfony Blog 09:32 – New in Symfony 3.2: HttpFoundation improvements

Added support for SameSite cookie attribute

Contributed by
Ian Carroll in #19104.

A new cookie attribute called same-site allows applications to disable third-party usage for any cookie. This helps protect users against CSRF attacks, because without those cookies the targeted website no longer sees the forged request as coming from a logged-in user.

In Symfony 3.2, the Cookie class constructor gained a ninth argument called $sameSite that can take either of the values defined by the Cookie::SAMESITE_LAX and Cookie::SAMESITE_STRICT constants:

use Symfony\Component\HttpFoundation\Cookie;

$cookie = new Cookie(..., Cookie::SAMESITE_LAX);

The strict mode prevents any cross-site usage of the cookie. In the lax mode, some top-level GET requests are allowed, such as following a link from another website or submitting a form that uses the GET method.
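As a more complete sketch, here is how a same-site cookie could be attached to a response; the full list of constructor arguments ($name, $value, $expire, $path, $domain, $secure, $httpOnly, $raw, $sameSite) is assumed here from the "ninth argument" mentioned above, with illustrative values:

use Symfony\Component\HttpFoundation\Cookie;
use Symfony\Component\HttpFoundation\Response;

$cookie = new Cookie(
    'session_info',          // name
    'abc123',                // value
    0,                       // expire (0 = session cookie)
    '/',                     // path
    null,                    // domain
    true,                    // secure (HTTPS only)
    true,                    // httpOnly
    false,                   // raw
    Cookie::SAMESITE_STRICT  // same-site policy
);

$response = new Response();
$response->headers->setCookie($cookie);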

Improved the response cache headers

Contributed by
Fabien Potencier in #18220 and #19143.

Previously, if you performed a 301 permanent redirect and didn't set a cache header, the no-cache header was added by Symfony. In Symfony 3.2 this behavior has changed and now 301 redirects don't add the no-cache header automatically, but they maintain it if you set it explicitly.

Symfony 3.2 also fixes another inconsistency related to cache headers. When the no-cache header is present, Symfony now also adds the private directive, so the response contains no-cache, private instead of just no-cache.

Added isMethodIdempotent() utility

Contributed by
Kévin Dunglas
in #19322.

HTTP safe methods are those that just retrieve resources but don't modify, delete or create them (only GET and HEAD methods are considered safe). The Request class includes an isMethodSafe() method to check whether the given HTTP method is considered safe or not.

HTTP idempotent methods are those that can be repeated in a sequence of identical requests and produce the same result without additional side effects. For example, PUT is idempotent because after two identical requests the resource still has the same state (it is simply replaced each time), but POST is not idempotent because after two identical requests you end up with two resources that have the same content.

In Symfony 3.2 we added a new method called isMethodIdempotent() to check whether the given HTTP method is idempotent or not.
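A minimal usage sketch, assuming the method lives on the Request object next to isMethodSafe():

use Symfony\Component\HttpFoundation\Request;

$request = Request::create('/articles/42', 'PUT');

if ($request->isMethodSafe()) {
    // GET or HEAD: the response could be served from a cache, for example
}

if ($request->isMethodIdempotent()) {
    // e.g. GET, HEAD, PUT or DELETE: the request can be retried safely
    // after a network failure without creating duplicate resources
}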



News stories from Thursday 14 July, 2016

A List Apart: The Full Feed 16:00 – The Itinerant Geek

This spring I spent almost a month on the road, and last year I delivered 26 presentations in eight different countries, spending almost four months traveling. While doing all of this I am also running a business. I work every day that I am on the road, most days putting in at least six hours in addition to my commitments for whichever event I am at. I can only keep up this pace because travel is not a huge stressor in my life. Here are some things I have learned about making that possible, in the hope they are useful to anyone setting off on their first long trip. Add your own travel tips in the comments.

Before you go

During the run-up to going away, I stay as organized as possible. Otherwise I would lose a lot of time just preparing for the trips. I have a Trello board set up with packing list templates. I copy a list and remove or add anything specific to that trip. Then I can just grab things without thinking about it and check them off. I also use Trello to log the status of plans for each trip; for example, do I have a hotel room and flights booked? Is the slide deck ready? Do I know how I am getting from the airport to the hotel? This way I have instant access to the state of my plans and can also share this information if needed.

It is easy to think you will always have access to your information in its original form. However, it is worth printing a copy of your itinerary to keep with you just in case you can’t get online or your phone battery runs out. For times when you don’t have physical access to something at the moment, take photos of your passport and car insurance (if it covers rentals), and upload them somewhere secure.

Your travel may require a visa. If your passport is expiring within six months of your trip, you may want to get a new one — some countries won’t issue a visa on a passport that is due to expire soon. You can in some cases obtain pre-authorization, such as through the American ESTA form for participating in its Visa Waiver Program. This might have changed since your last trip. For example, Canada has introduced an eTA system as of March 2016. I’ve traveled to Canada for ConFoo for the last four years - if I attend next year, I’ll need to remember to apply for this beforehand.

Tell your bank and credit card company that you are traveling to try and avoid their blocking your card as soon as you make a purchase in your destination.

Make sure you have travel insurance that covers not only your possessions but yourself as well. Be aware that travel insurance will not pay out if you become sick or injured due to an existing condition that you didn’t tell them about first. You will have to pay an increased premium for cover of an existing issue, but finding yourself with no cover and far from home is something you want to avoid.

Make sure that you have a sufficient supply of any medicine that you need. Include some extra in case of an unscheduled delay in returning home. I also usually pack a few common remedies - especially if I am going somewhere that is not English speaking. I have a vivid memory of acting out an allergic reaction to a Polish pharmacist to remind me of this!

I also prepare for the work I’ll be doing on the road. In addition to preparing for the talks or workshops I might be giving, I prepare for work on Perch or for the business. I organize my to-do list to prioritize tasks that are difficult to do on the road, and make sure they are done before I go. I push tasks into the travel period that I find easier on the small screen of my laptop, or that I can complete even in a distracting environment.

When booking travel, give yourself plenty of time. If you are short of time then every delay becomes stressful, and stress is tiring. Get to the airport early. Plan longer layovers than the 70 minutes your airline believes it will take you to deplane from the first flight and make it round a labyrinthine nightmare from the 1980s to find the next one. On the way home from Nashville, my first plane was delayed due to the inbound flight having to change equipment. The three-hour layover I had chosen meant that even with almost two hours of delay I still made my transatlantic leg home in time. Travel is a lot less stressful if you allow enough time for things to go wrong.

Air travel tips

Try to fly with the same airline or group in order to build up your frequent flyer status. Even a little bit of “status” in an airline miles program will give you some perks, and often priority for upgrades and standby tickets.

If you want to take anything of significant size onto the aircraft as hand luggage, the large roller bags are often picked out to be gate-checked on busy flights. I travel with a Tom Bihn Aeronaut bag, which I can carry as a backpack. It is huge, but the gate staff never spot it and due to being soft-sided, it can squash into the overhead compartments on the smaller planes that are used for internal U.S. flights.

Have in your carry-on an overnight kit in case your checked luggage does not make it to your destination at the same time as you do. Most of the time you’ll find your bag comes in on the next flight and will be sent to your hotel, but if you need to get straight to an event it adds stress to be unable to change or brush your teeth.

If you plan to work on the flight, charge your laptop and devices whenever you can. More and more planes come with power these days - even in economy - but it can’t be relied on. I have a BatteryBox, a large external battery. It’s a bit heavy but means I can work throughout a 10-hour flight without needing to plug in.

On the subject of batteries, airlines are becoming increasingly and understandably concerned about the fire risk posed by lithium ion batteries. Make sure you keep any spare batteries in your hand luggage and remove them if your bag is gate-checked. Here is the guide issued by British Airways on the subject.

A small flat cool bag, even without an icepack, works for a good amount of time to cool food you are bringing from airside as an alternative to the strange offerings onboard. I usually pop a cold water bottle in with it. London Heathrow T5 has a Gordon Ramsay “Plane Food” restaurant that will make you a packed lunch in a small cool bag to take on the plane!

Get lounging

Airport lounges are an oasis. Something I didn’t realize when I started traveling is that many airport lounges are pay on entry rather than being reserved for people with higher class tickets or airline status. If you have a long layover then the free drinks, wifi, power, and snacks will be worth the price - and if it means you can get work done you can be making money. The LoungeBuddy app can help you locate lounges that you can access whether you have airline status or not.

There is another secret to airline lounges: they often have a hotline to the airline and can sort out your travel issues if your flight is delayed or canceled. With the delayed flight in my last trip I checked myself into the American Airlines lounge, mentioning my delay and concern for the ongoing leg of the flight. The member of staff on the desk had the flight status checked and put me on standby for another flight “just in case.” She then came to let me know - while I happily sat working in the lounge - that it all looked as if it would resolve in time for me to make my flight. Once again, far less stressful than trying to work this out myself or standing in a long line at the desk in the airport.

Looking after yourself

If you do one or two trips a year then you should just relax and enjoy them - eat all the food, drink the drinks, go to the parties and forget about your regular exercise routine. If you go to more than 20, you won’t be able to do that and also do anything else. I quickly learned how to pace myself and create routines wherever I am that help to bring a sense of normal life to hotel living.

I try as much as possible to eat the same sort of food I usually eat for the majority of the time - even if it does mean I’m eating alone rather than going out for another dinner. Hotel restaurants are used to the fussiest of international travelers and will usually be able to accommodate reasonable requests. I do a quick recce of possible food options when I arrive in a location, including places I can cobble together a healthy packed lunch if the conference food is not my thing. I’ll grab a sparkling water from the free bar rather than another beer, and I’ll make use of the hotel gym or go for a run to try and keep as much as possible to the training routine I have at home. I do enjoy some great meals and drinks with friends - I just try not to make that something that happens every night, then I really enjoy those I do get to.

I’m fortunate to not need a lot of sleep, however I try to get the same amount I would at home. I’ve also learned not to stress the time differences. If I am doing trips that involve the East and West Coast of America I will often just remain on East Coast time, getting up at 4am rather than trying to keep time-shifting back and forth. If you are time-shifting, eating at the right time for where you are and getting outside into the light can really help. The second point is not always easy given the hotel-basement nature of many conference venues. I tend to run in the morning to remind myself it is daytime, but just getting out for a short walk in the daylight before heading into the event can make a huge difference.

I take care to wash my hands after greeting all those conference-goers and spending time in airports and other places, and am a liberal user of wet wipes to clean everything from my plane tray table to the hotel remote control. Yes, I look like a germaphobe, however I would hate to have to cancel a talk because I got sick. Taking a bit of care with these things does seem to make a huge difference in terms of the number of minor illnesses I pick up.

Many of us in this industry are introverts and find constant expectation to socialize and be available tiring. I’m no exception and have learned to build alone time into my day, which helps me to be more fully present when I am spending time with other speakers and attendees. Even as a speaker at an event, when I believe it is very important for me to be available to chat to attendees and not to just vanish, this is possible. Being at a large number of events I often have seen the talks given by other speakers, or know I can catch them at the next event. So I will take some time to work or relax during a few sessions in order to make myself available to chat during the breaks.

If you are taking extended trips of two weeks or more these can be hugely disruptive to elements of your life that are important to your wellbeing. That might be in terms of being unable to attend your place of worship, meet with a therapist, or attend a support group meeting. With some thought and planning you may be able to avoid this becoming an additional source of stress - can you find a congregation in your location, use Skype to meet with your therapist, or touch base with someone from your group?

Working on the road

Once at your destination, getting set up to work comfortably makes a huge difference to how much you can get done. Being hunched over a laptop for days will leave you tired and in pain. My last trip was my first with the new and improved Roost Stand, along with an external Apple keyboard and trackpad. The Roost is amazing; it is incredibly light and allowed me to get the laptop to a really great position to work properly.

Plan your work periods in advance and be aware of what you can do with no, or limited internet connectivity. In OmniFocus I have a Context to flag up good candidates for offline work, and I also note what I need to have in order to do that work. I might need to ensure I have a copy of some documentation, or to have done a git pull on a repository before I head into the land of no wifi. I use Dash for technical documentation data sets when offline. On a ten-hour flight with no wifi you soon realize just how much stuff you look up every day!

If traveling to somewhere that is going to be horribly expensive for phone data, do some research in advance and find out how to get a local pay-as-you-go sim card. If you want to switch that in your phone, you need to have an unlocked phone (and also the tools to open your phone). My preferred method is to put the card into a mobile broadband modem, then connect my phone to that with the wifi. This means I can still receive calls on my usual number.

The possibility of breaking, losing, or having your laptop stolen increases when it isn’t safely on your desk in the office. Have good insurance, but also good backups. During conferences, we often switch off things like Dropbox or our backup service in order to preserve the wifi for everyone - don’t forget you have done this! As soon as you are able, make sure your backups run. My aim is always to be in a position where if I lost my laptop, I could walk into a store, buy a new one and be up and running within a few hours without losing my work, and especially the things I need to present.

Enjoy the world!

Don’t forget to also plan a little sightseeing in the places you go. I would hate to feel that all I ever saw of these places was the airport, hotel, and conference room. I love to book myself on a walking tour. You can discover a lot about a city in a few hours the morning before your flight out, and there are always other lone business travelers on these tours. I check Trip Advisor for reviews to find a good tour. Lonely Planet have “Top things to do in…” guides for many cities: here is the guide for Paris. I’ll pick off one item that fits into the time I have available and head out for some rapid tourism. As a runner I’m also able to see many of the sights by planning my runs around them!

Those of us who get to travel, who have the privilege of doing a job that can truly be done from anywhere, are very lucky. With a bit of planning you can enjoy travel, be part of events, and still get work done and remain healthy. By reducing stressful events you do have control over, you can be in better shape to deal with the inevitable times you do not.

News stories from Wednesday 13 July, 2016

Symfony Blog 09:07 – New in Symfony 3.2: User value resolver for controllers

Contributed by
Iltar van der Berg
in #18510.

In Symfony applications, controllers that make use of the base Controller class can get the object that represents the current user via the getUser() shortcut:

use Symfony\Bundle\FrameworkBundle\Controller\Controller;

class DefaultController extends Controller
{
    public function indexAction()
    {
        $user = $this->getUser();
       // ...
    }
}

In the past, you could also get the current request object with the getRequest() shortcut, which was deprecated in Symfony 2.4 in favor of the Request type-hint:

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\HttpFoundation\Request;

class DefaultController extends Controller
{
    public function indexAction(Request $request) { ... }
}

In Symfony 3.2, we've added a new user resolver that lets you get the current user in any controller via type-hinting, and we deprecated the Controller::getUser() shortcut, which will be removed in Symfony 4.0:

use Symfony\Bundle\FrameworkBundle\Controller\Controller;
use Symfony\Component\Security\Core\User\UserInterface;

class DefaultController extends Controller
{
    // when the user is mandatory (e.g. behind a firewall)
    public function fooAction(UserInterface $user) { ... }

    // when the user is optional (e.g. can be anonymous)
    public function barAction(UserInterface $user = null) { ... }
}

This feature uses the argument resolver extension mechanism that was introduced in Symfony 3.1. This mechanism allows you to register your own value resolvers for controller arguments.
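As an illustration of that extension point, here is a minimal sketch of a custom value resolver; the ApiClient class and the X-Api-Key header are hypothetical, while the interface and its two methods are the extension point mentioned above:

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpKernel\Controller\ArgumentValueResolverInterface;
use Symfony\Component\HttpKernel\ControllerMetadata\ArgumentMetadata;

// hypothetical value object injected into controllers that type-hint it
class ApiClient
{
    public $apiKey;

    public function __construct($apiKey)
    {
        $this->apiKey = $apiKey;
    }
}

class ApiClientValueResolver implements ArgumentValueResolverInterface
{
    public function supports(Request $request, ArgumentMetadata $argument)
    {
        return ApiClient::class === $argument->getType();
    }

    public function resolve(Request $request, ArgumentMetadata $argument)
    {
        // resolve() is a generator: yield the value(s) to inject
        yield new ApiClient($request->headers->get('X-Api-Key'));
    }
}

The resolver is then registered as a service tagged with controller.argument_value_resolver so that it is taken into account when Symfony resolves the controller arguments.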



News stories from Tuesday 12 July, 2016

A List Apart: The Full Feed 16:00 – Strategies for Healthier Dev

Not too long ago, I was part of a panel at the launch event for TechLadies, an initiative that encourages women to learn to code. Along the way, I mentioned a bit about my background as an athlete. As we were leaving to go home, the woman next to me jokingly asked if I was a better basketball player or a better developer. Without missing a beat, I said I was a better basketball player. After all, I’ve been playing basketball for over half my life; I’ve only been coding for two and a half years.

We’ve probably all come across the stereotype of the nerdy programmer who is all brains and no brawn. I’m a counterexample of that cliché, and I personally know developers who are avid cyclists or marathon runners—even a mountain climber (the kind who scales Mount Everest). And yet a stereotype, “a widely held but fixed and oversimplified image,” often comes into existence for a reason. Think of Douglas Coupland’s Microserfs. Think of any number of mainstream dramas featuring wan (usually white, usually male) programmers staring at screens. Many so-called knowledge workers are too sedentary. Our lives and work stand to benefit if we become less so.

Now, no one likes to suffer. And yet when it comes to exercise or training, it’s too easy for us to think that fitness is all about self-discipline—that we just need to have the willpower to persevere through the agony. But that’s not a good strategy for most people. Unless you genuinely find pleasure in pain and suffering, you have to want something badly enough to endure pain and suffering. Ask any athlete if they enjoy running extra sprints or lifting extra weights. Even Olympic medalists will tell you they don’t. They do it because they want to be the best.

My point is this: forcing yourself to do something you don’t enjoy is not sustainable. I’ll be the first to admit that I’m not a big fan of running. A little ironic coming from someone who used to play basketball full-time, maybe, but the only reason I did any running at all, ever, was because competitive basketball required me to. When I stopped training full-time, I simply couldn’t muster the energy or motivation to get up and run every day (or even every week, for that matter).

So I had to come up with a different game plan—one that required minimal effort, near-zero effort, and minor effort. You can do it, too. No excuses. Ready?

Minimal effort

I’m lazy.

I’m pretty good at talking myself out of doing things that require extra effort to get ready for. For example, going swimming requires that I pack toiletries, a fresh set of clothes, and goggles. Then I actually need to make it to the pool after work before it closes, which means I have to plan to leave the office earlier than I usually might, and so on. Guess what? Eight out of ten times, I end up telling myself to go swimming next time.

By contrast, I commute to work on my bicycle. Yes, it helps that I love to ride. I thoroughly enjoy swimming, too—just not enough to overcome my laziness. But because cycling is my main mode of transportation, I don’t even think about it as exercise. It’s just something I do as part of my day, like brushing my teeth.

The “while-you’re-at-it” technique works very well for me, and maybe it’ll work for you, too. In a nutshell: build healthy habits into things you already do. Kind of how parents hide vegetables in more palatable stuff to get their kids to eat them.

Near-zero effort

Let me list some simple activities that involve minimal effort, but have significant returns on investment. Consider these the minimum viable products (MVPs) of healthy habits.

Drink more water

Most of us have been told to drink eight glasses of water a day, but how many of us actually drink that much? The real amount of water people need on a daily basis seems debatable, but I’m going to make the bold assumption that most of us don’t drink more than one liter (or around four glasses) of water a day. And no, coffee doesn’t count.

This means that most of us operate in a mildly dehydrated state throughout the day. Studies done on both men and women have shown that mild dehydration negatively impacts one’s mood and cognitive function. Given that our work requires significant mental acuity, upping our water intake is a minimal-effort lifehack with significant benefits.

Note that people often mistake thirst for hunger. Studies have shown that we’re notoriously bad at distinguishing the two. Assuming that most of us probably don’t drink enough water throughout the day, odds are that you’re not really hungry when you reach for a snack. In fact, you’re probably thirsty. Don’t grab a can of soda, though—drink water.

Move more

A study done on the effects of sedentary behavior revealed that long periods of inactivity increase one’s risk of diabetes and heart disease. The study also mentioned that encouraging individuals simply to sit less and move more, regardless of intensity level, may improve the effectiveness of diabetes-prevention programs.

Think about how you can incorporate more movement into your routine. Try drinking water throughout the day. Not only will this reinforce the “drink more water” habit, but you’ll also find that you need to get up to go to the bathroom more often. And going to the bathroom is…movement. Note: do not refuse to go to the bathroom because you think you’re “on the brink” of solving a bug. That’s a lie you tell yourself.

Since you’re getting up and sitting down more often, you might as well sneak some exercise in while you’re at it. Instead of plonking down in your seat when you get back, lower yourself slowly over the course of five seconds until your butt touches your chair. You’re building leg muscles! Who needs a gym? The point is, all the little things you do to increase movement add up.

Don’t eat while you work

It might surprise you to know that being aware of what you put in your mouth—and when you put it there—makes a difference. I know many people, not only developers, who eat lunch at their desks, balancing a spoonful of food in one hand while continuing to type with the other. Lunch becomes something that’s shoveled into our mouths and (maybe, if we have time) swallowed. That’s no way to appreciate a meal. Make lunchtime a logical break between your coding sessions. Some folks may protest that there’s just no time to eat: we have to code 20 hours a day!

First of all, it’s impossible to be efficient that way. A study (PDF) from the University of Illinois at Urbana-Champaign has shown that taking a deliberate break can reboot focus on the task at hand. It offsets our brain’s tendency to fall into autopilot, which explains why we can’t come up with good solutions after continuously staring at a bug for hours. Tom Gibson wrote a beautiful post explaining how human beings are not linear processes. We are still operating on an industrial model where emphasis is placed on hours worked, not output achieved.

We need to aim for a healthy “Work Rate Variability” and develop models of working that stop making us ill, and instead let us do our best.
Tom Gibson

Also, by actually bothering to chew your food before swallowing, you eat more slowly. Research has shown that eating slowly leads to lower hunger ratings and increased fullness ratings. Chances are you’ll feel healthier overall and gain a fresh sense of perspective, too, by giving yourself a proper lunch break. Such is the power of minimal effort.

Use a blue-light filter at night

Personally, I’m a morning person, but most of my developer friends are night owls. Everybody functions best at different times of the day, but if you’re someone who operates better at night, I recommend installing f.lux on your desktop and mobile devices. It’s a tiny application that makes the color of your computer’s display adapt to ambient light and time of day.

Melatonin is a hormone that helps maintain the body’s circadian rhythms, which determine when we sleep and wake up. Normally, our bodies produce more melatonin when it gets dark. Scientists have found that exposure to room light in the evening suppresses melatonin during normal sleep hours. Research on the effects of blue light has shown that blue light suppresses sleep-associated delta brainwaves while stimulating alertness. Because it doesn’t make sense, given socioeconomic realities, to ask people to stop working at night, the best alternative is to reduce exposure to blue light.

Minor effort required

If you’ve already started incorporating zero-effort health habits into your life, and feel like putting in a bit more effort, this section outlines tactics that take a little more than zero effort.

Walk

When I started writing code, I found myself glued to my chair for hours on end. You know that feeling when you’re debugging something and obstinately refuse to let that bug get the better of you? But I realized that my efficiency decreased the longer I worked on something without stopping. I can’t tell you how many times I worked on a bug till I threw my hands up in frustration and went for a walk, only to have the solution come to me as I strolled outside enjoying the breeze and a change of scenery.

Walking doesn’t require any additional planning or equipment. Most of us, if we’re lucky, can do it without thinking. The health benefits accrued include a reduction of chronic diseases like stroke and heart disease. Try this: as part of your attempt to have a better lunch break, take a walk after you’ve properly chewed and swallowed your lunch. It limits the increase of blood sugar levels immediately after a meal. You’ll get fitter while you’re at it.

Stretch

I don’t know about you, but sitting for long periods of time makes my hips feel tight and my back tense up. The scientific research on the exact effects of sitting on the structural integrity of your hip flexors seems to be inconclusive, but I know how I feel. A lot of us tend to slouch in our chairs, too, which can’t be good for our overall posture.

If you find yourself craning your neck forward at your desk, with your shoulders up near your ears and back rounded forward, news flash! You have terrible posture. So what can you do about it? Well, for starters, you can refer to a handy infographic from the Washington Post that summarizes the ills of bad posture. The TL;DR: bad posture negatively affects your shoulders, neck, hips, and especially your back.

Slouching for prolonged periods causes the soft discs between our vertebrae to compress unevenly. If you take a sponge and place a weight on one side of it and leave it there for hours, the sponge will warp. And that’s exactly what happens to our discs. As someone who has suffered from a prolapsed disc, I can tell you that back trouble is no fun at all.

Here’s another thing you can do: stretch at your desk. You don’t have to do all of these exercises at once—just sprinkle them throughout your work day. The improved blood circulation will be a boon for your brain, too.

Sleep

Most of us don’t get enough sleep. I hardly know anyone over the age of 12 who goes to bed before 11 p.m. Maybe that’s just the company I keep, but there are lots of reasons for not getting enough sleep these days. Some of us work late into the night; some of us game late into the night. Some of us care for children or aging parents, or have other responsibilities that keep us up late. I live in Singapore, which ranks third on the list of cities clocking the fewest hours of sleep: six hours and 32 minutes.

Sleep deprivation means more than just yawning all the time at work. Research has shown that the effects of sleep deprivation are equivalent to being drunk. Insufficient sleep affects not only your motor skills, but also your decision-making abilities (PDF) and emotional sensitivity (PDF). You become a dumb, angry troll when sleep-deprived.

Changing your sleep habits takes some effort. The general advice is to sleep and wake up at the same time each day, and to try to aim for seven and a half hours of sleep. According to Professor Richard Wiseman, a psychology professor at the University of Hertfordshire, our sleep cycles run in 90-minute intervals. Waking up in the middle of those cycles makes us groggy. Wiseman offers tips on how to sleep better.

Resistance training

By “resistance training,” I don’t mean hefting iron plates and bars at the gym (though if you like to do that, more power to you). If you enjoy the privilege of able-bodiedness, try to make vigorous physical movement part and parcel of your daily life. Ideally, you’ll have the basic strength and coordination to run and jump. And to be able to get right up without much effort after falling down. You don’t have to be an elite athlete—that’s a genetic thing—but with luck, you’ll be able to perform at least some basic movements.

Our own body weight is plenty for some rudimentary exercises. And it doesn’t matter if the heaviest weight you’re willing to lift is your laptop and you couldn’t do a push-up if your life depended on it. There are progressions for everyone. Can’t do a push-up on the ground? Do a wall push-up instead. Can’t do a basic squat? Practice sitting down on your chair very slowly. Can’t run? Take a walk. (Yes, walking is a form of resistance training). And so on.

There are two websites I recommend checking out if you’re interested in learning more. The first is Nerd Fitness by Steve Kamb. He and I share a similar philosophy: small changes add up to big results. He covers topics ranging from diet to exercise and offers lots of resources to help you on your journey. Another site I really love is GMB fitness. It teaches people how to move better, and to better understand and connect with their bodies.

Wrapping up: slow & steady

There is only one way to build new habits: consistency over time. That’s why it’s so important to do things that take minimal effort. The less effort an action requires, the more likely you are to do it consistently. Also: try not to make drastic changes to all aspects of your life at once (though that may be effective for some). Regardless of whether you mind change in your life or not, almost any change introduces stress to your system. And even constant low-grade stress is detrimental. It’s better to start small, with minor changes that you barely feel; once that becomes a habit, move on to the next change.

We spend hours maintaining our code and refactoring to make it better and more efficient. We do the same for our computers, optimizing our workflows and installing tweaks to eke out those extra seconds of performance. So it’s only right that we put a little effort into keeping our bodies reasonably healthy. Fixing health problems usually costs more than fixing bugs or machines—and often the damage is irreversible. If we want to continue to write great code and build cool products, then we should take responsibility for our health so that we can continue to do what we love for decades to come.

Symfony Blog 08:58 – New in Symfony 3.2: Lazy loading of form choices

Contributed by
Jules Pietri
in #18332.

ChoiceType is the most powerful Symfony form type and it's used to create select drop-downs, radio buttons and checkboxes. In Symfony 3.2 we added a new feature to improve its performance: lazy loading the choice values.

First, define the choice_loader option for the ChoiceType and then use the new CallbackChoiceLoader class to set the PHP callable that is executed to get the list of choices:

use Symfony\Component\Form\ChoiceList\Loader\CallbackChoiceLoader;
use Symfony\Component\Form\Extension\Core\Type\ChoiceType;

$builder->add('constants', ChoiceType::class, [
    'choice_loader' => new CallbackChoiceLoader(function () {
        return StaticClass::getConstants();
    }),
]);

The CallbackChoiceLoader class implements ChoiceLoaderInterface, which is now also implemented in every ChoiceType subtype, such as CountryType, CurrencyType, LanguageType, LocaleType and TimezoneType.



News stories from Monday 11 July, 2016

Symfony Blog 08:48 – New in Symfony 3.2: Better readability for YAML numeric literals

Contributed by
Baptiste Clavié
in #18486.

Long numeric literals, whether integer, float, or hexadecimal, are known for their poor readability in code and configuration files:

parameters:
    credit_card_number: 1234567890123456
    long_number: 10000000000
    pi: 3.141592653589793
    hex_words: 0xCAFEF00D

In Symfony 3.2, YAML files added support for including underscores in numeric literals to improve their readability:

parameters:
    credit_card_number: 1234_5678_9012_3456
    long_number: 10_000_000_000
    pi: 3.14159_26535_89793
    hex_words: 0x_CAFE_F00D

During parsing of the YAML contents, all the _ characters are removed from the numeric literals, so there is no limit on the number of underscores you can include or on the way you group the digits.

This feature is defined in the YAML specification and it's widely supported in other programming languages (Java, Ruby, Rust, Swift, etc.).
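A small sketch of what this looks like when parsing such contents with the Yaml component (the keys and values are illustrative):

use Symfony\Component\Yaml\Yaml;

$data = Yaml::parse(
    "credit_card_number: 1234_5678_9012_3456\n".
    "long_number: 10_000_000_000\n"
);

// the underscores are stripped during parsing:
// $data['credit_card_number'] === 1234567890123456
// $data['long_number'] === 10000000000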

Deprecating comma separators in floats

Contributed by
Christian Flothmann
in #18785.

The new underscore separator made us rethink the need for the comma as a thousands separator in float values, and ultimately we decided to deprecate it. Starting from Symfony 3.2, using a comma separator in numeric literals is deprecated:

parameters:
    # deprecated since Symfony 3.2
    foo: 1,230.15

    # equivalent without the comma separator
    foo: 1230.15

    # equivalent with the underscore separator
    foo: 1_230.15


News stories from Sunday 10 July, 2016

Favicon for Symfony Blog 11:52 A week of symfony #497 (4-10 July 2016) » Post from Symfony Blog Visit off-site link

This week Symfony focused on fixing issues and tweaking existing features. Meanwhile, we continued blogging about the new Symfony 3.2 features. Lastly, the SymfonyLive Chicago Call for Papers was announced, while the Calls for Papers for SymfonyLive London and SymfonyCon Berlin close next week.

Symfony development highlights

2.7 changelog:

  • 0259bed: [Validator] UuidValidator must accept a Uuid constraint
  • 7c39ac1: [ClassLoader] fixed declared classes being computed when not needed
  • 41d6758: [Form] fixed bug in ButtonBuilder name
  • b7ed32a: [Validator] added additional MasterCard range to the CardSchemeValidator
  • b795cfa: [HttpKernel] fixed internal subrequests having an if-modified-since-header

2.8 changelog:

  • 4e7cc3b: [VarDumper] fixed indentation trimming in ExceptionCaster
  • 1f70837: [VarDumper] fixed missing usage of ExceptionCaster::$traceArgs
  • f8d3ef7: [DoctrineBridge] added missing error code for constraint
  • 0bac08a: [Security] fixed deprecated usage of DigestAuthenticationEntryPoint::getKey() in DigestAuthenticationListener
  • b795cfa: [HttpKernel] fixed internal subrequests having an if-modified-since-header

3.1 changelog:

  • ab8c2c7: [HttpKernel] clarified deprecation of non-scalar values in surrogate fragment renderer

Newest issues and pull requests

They talked about us



News stories from Friday 08 July, 2016

Favicon for Symfony Blog 10:03 New in Symfony 3.2: Console Improvements (Part 2) » Post from Symfony Blog Visit off-site link

In this second of a two-part series, we introduce four additional new features added by Symfony 3.2 to the Console component to improve its DX (developer experience).

Introduced a new Terminal class

Console's Application class defines several methods to get the dimensions (height and width) of the terminal window:

use Symfony\Component\Console\Application;

$application = new Application();
$dimensions = $application->getTerminalDimensions(); // [$width, $height]
$height = $application->getTerminalHeight();
$width = $application->getTerminalWidth();

Technically, getting this information for all kinds of terminals and operating systems is a complex, convoluted, slow, and error-prone process. In Symfony 3.2 we decided to move all this logic into a new Terminal class:

use Symfony\Component\Console\Terminal;

$height = (new Terminal())->getHeight();
$width = (new Terminal())->getWidth();

In addition, we improved the logic to get/set the terminal dimensions to prioritize the use of environment variables. If the COLUMNS and LINES environment variables are defined, Terminal uses their values to get the dimensions. When setting the terminal dimensions, Terminal creates or updates the values of those variables.
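
As a quick illustration (a minimal sketch; the values are arbitrary), the environment variables take precedence over any detection logic:

use Symfony\Component\Console\Terminal;

// if COLUMNS and LINES are defined, Terminal uses them as the dimensions
putenv('COLUMNS=120');
putenv('LINES=40');

$terminal = new Terminal();
$width = $terminal->getWidth();   // 120
$height = $terminal->getHeight(); // 40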

This new Terminal class will be used in the future to get/set more information about the terminal besides its dimensions. For now, these changes have allowed us to fix some edge cases in the progress bar helper when the terminal window was small.

Introduced a new StreamableInputInterface

Contributed by
Robin Chalas
in #18999.

In Symfony 2.8 we introduced a new style guide for console commands that simplifies creating consistent-looking commands. However, these commands were hard to test, especially when using the ask() helper to ask for user input.

In Symfony 3.2 we've introduced a new StreamableInputInterface and made the abstract Symfony\Component\Console\Input\Input class implement it. This change centralizes the management of the input stream in a single class and makes the QuestionHelper-related code easier to test.
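
As a rough sketch of what this enables (the command name and answer are hypothetical), any input extending the abstract Input class can now be fed a custom stream:

use Symfony\Component\Console\Input\ArrayInput;

$input = new ArrayInput(['command' => 'app:ask-something']);

// prepare an in-memory stream that simulates the user typing "foo"
$stream = fopen('php://memory', 'r+');
fwrite($stream, "foo\n");
rewind($stream);

// thanks to StreamableInputInterface, the QuestionHelper can read from it
$input->setStream($stream);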

Added a hasErrored() method in ConsoleLogger

Contributed by
Nicolas Grekas
in #19090.

In Symfony 3.2, the ConsoleLogger class includes a hasErrored() method that returns true as soon as one message of ERROR level has been logged. This way you don't have to add any custom logic to decide whether your command should return an error exit code (exit(1)) or not.
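
A minimal sketch of how this might look inside a command (the command name is hypothetical):

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Logger\ConsoleLogger;
use Symfony\Component\Console\Output\OutputInterface;

class ImportDataCommand extends Command
{
    protected function configure()
    {
        $this->setName('app:import-data');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $logger = new ConsoleLogger($output);

        // ... do the work, reporting problems via $logger->error('...')

        // exit with 1 if any ERROR-level message was logged, 0 otherwise
        return $logger->hasErrored() ? 1 : 0;
    }
}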

Added a "Lockable" trait

Contributed by
Geoffrey Brier
in #18471.

In Symfony 2.6 we introduced a lock handler to provide a simple abstraction for locking anything by means of a file lock. This lock handler is mainly used to avoid concurrency issues by preventing multiple executions of the same command:

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;
use Symfony\Component\Filesystem\LockHandler;

class UpdateContentsCommand extends Command
{
    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $lock = new LockHandler('update:contents');
        if (!$lock->lock()) {
            // manage lock errors
        }

        // ...
    }
}

In Symfony 3.2 we made the lock handler a bit easier to use thanks to the new LockableTrait. This trait provides a lock() method that creates a non-blocking lock named after the current command:

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Command\LockableTrait;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class UpdateContentsCommand extends Command
{
    use LockableTrait;

    protected function execute(InputInterface $input, OutputInterface $output)
    {
         if (!$this->lock()) {
             // manage lock errors
         }

        // ...
    }
}

You can also create locks with custom names and even blocking locks that wait until any existing lock is released:

if (!$this->lock('custom_lock_name')) { ... }

// the second boolean argument tells whether the lock is blocking or not
if (!$this->lock('custom_lock_name', true)) { ... }


News stories from Thursday 07 July, 2016

Favicon for Symfony Blog 10:20 New in Symfony 3.2: Console Improvements (Part 1) » Post from Symfony Blog Visit off-site link

The Console component will receive a lot of new features in Symfony 3.2, mostly related to improving its DX (developer experience). In this first of a two-part series, we introduce four of those new features.

Command aliases are no longer displayed as separate commands

Contributed by
Juan Miguel Rodriguez
in #18790.

Best practices recommend defining namespaced commands to avoid collisions and improve the organization of your application. However, for frequently executed commands, it's convenient to define shortcuts:

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;

class VeryLongNameCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this
            ->setName('app:very:long:name')
            ->setDescription('Lorem Ipsum...')
            // ...
            ->setAliases(['foo'])
        ;
    }

    // ...
}

In the above example, the command can be executed as ./bin/console app:very:long:name and as ./bin/console foo. Although there is just one command, Symfony will show it as two separate commands:

$ ./bin/console

Available commands:
  foo                      Lorem Ipsum...
 app:
  app:very:long:name       Lorem Ipsum...

In Symfony 3.2 aliases are now inlined in their original commands, reducing the clutter of the console output:

$ ./bin/console

Available commands:
 app:
  app:very:long:name       [foo] Lorem Ipsum...

Errors are now displayed even when using the quiet mode

Contributed by
Olaf Klischat
in #18781.

If you add the -q or --quiet option when running a Symfony command, the output is configured with the OutputInterface::VERBOSITY_QUIET level. This makes the command output no messages at all, not even error messages.

In Symfony 3.2 we've improved the -q and --quiet options to keep suppressing all the output except for the log messages of Logger::ERROR level. This way you'll never miss an error message again.

Better support for one command applications

Contributed by
Grégoire Pineau in #16906.

Building a single-command application in Symfony is possible, but it requires some changes so that you don't have to pass the command name every time. In Symfony 3.2 we've improved the base Application class to support single-command applications out of the box.

First, define a command as usual and create the console application. Then, set the only command as the default command and pass true as the second argument of setDefaultCommand(). That will turn the application into a single command application:

use Symfony\Component\Console\Application;

$command = new \FooCommand();

$application = new Application();
$application->add($command);
// the second boolean argument tells if this is a single-command app
$application->setDefaultCommand($command->getName(), true);

// this now executes the 'FooCommand' without passing its name
$application->run();

Simpler command testing

Contributed by
Robin Chalas
in #18710.

Testing a Symfony command is unnecessarily complex and it requires you to go deep into PHP streams. For example, if your test needs to simulate a user typing 123, foo and bar, you have to do the following:

use Symfony\Component\Console\Tester\CommandTester;

// a helper method in your test class that wraps the input in a stream
protected function getInputStream($input)
{
    $stream = fopen('php://memory', 'r+', false);
    fputs($stream, $input);
    rewind($stream);

    return $stream;
}

// inside the test method, feed that stream to the question helper
$commandTester = new CommandTester($command);
$helper = $command->getHelper('question');
$helper->setInputStream($this->getInputStream("123\nfoo\nbar\n"));

In Symfony 3.2 we've simplified command testing by adding a new setInputs() method to the CommandTester helper. You just need to pass an array with the contents that the user would type:

use Symfony\Component\Console\Tester\CommandTester;

$commandTester = new CommandTester($command);
$commandTester->setInputs(['123', 'foo', 'bar']);


News stories from Wednesday 06 July, 2016

Favicon for Symfony Blog 09:42 New in Symfony 3.2: Routing Improvements » Post from Symfony Blog Visit off-site link

Added support for URL fragments

Contributed by
Rhodri Pugh
in #12979.

The fragment identifier is the optional last part of a URL that starts with a # character and is used to identify a portion of a document. This URL element is increasingly popular because some applications use it as a navigation mechanism. For that reason, Symfony 3.2 allows you to define the fragment when generating a URL thanks to a new reserved routing property called _fragment:

// generating a regular URL (/settings)
$this->get('router')->generate('user_settings');

// generating a URL with a fragment (/settings#password)
$this->get('router')->generate('user_settings', ['_fragment' => 'password']);

This _fragment option can also be used when defining the route in any of the formats supported by Symfony:

/**
 * @Route("/settings", defaults={"_fragment" = "password"}, name="user_settings")
 */
public function settingsAction() { ... }

Added support for array values in XML routes

Contributed by
Christian Flothmann
in #11394.

XML is not one of the most popular formats to define routes in Symfony applications. In addition to its verbosity, it lacks some features from other formats, such as using arrays to define the default routing values:

<routes>
    <route id="blog" path="/blog/{page}">
        <default key="_controller">AppBundle:Blog:index</default>
        <!-- you can't define the type of the 'page' property and you can't
             use an array as the value of a '<default>' element -->
        <default key="page">1</default>
    </route>
</routes>

In Symfony 3.2 we decided to improve the XmlFileLoader class of the Routing component to allow defining the variable type of any <default> element:

<routes>
    <route id="blog" path="/blog/{page}">
        <default key="_controller">
            <string>AppBundle:Blog:index</string>
        </default>
        <default key="page">
            <int>1</int>
        </default>
    </route>
</routes>

Now you can also use arrays as the value of any <default> element (using <list> for scalar arrays and <map> for associative arrays):

<routes>
    <route id="blog" path="/blog/{page}">
        <default key="_controller">
            <string>AppBundle:Blog:index</string>
        </default>
        <default key="page">
            <int>1</int>
        </default>
        <default key="values">
            <map>
                <bool key="public">true</bool>
                <int key="page">1</int>
                <float key="price">3.5</float>
                <string key="title">foo</string>
            </map>
        </default>
    </route>
</routes>


News stories from Tuesday 05 July, 2016

Favicon for A List Apart: The Full Feed 16:00 Create an Evolutionary Web Strategy with a Digital MRO Plan » Post from A List Apart: The Full Feed Visit off-site link

Many organizations, large and small, approach creating their web presence as if it’s a one-time project. They invest an enormous amount of time and money in a great web design, content strategy, and technical implementation; and then they let the website sit there for months and even years without meaningful updates or enhancements. When the web presence becomes so out of date it’s barely functional, it becomes clear to them that the site needs a refresh (or more likely another full redesign).

Redesigns are great. But there’s a better way: ensure your client has a website that continually adapts to their needs.

Equip your client with a framework that helps them with ongoing management of their web presence. This plan also ensures you continue to build a strong relationship over the long term. It’s called an MRO plan.

MRO stands for Maintenance, Repair, and Overhaul. It’s a term most often used with building facilities or machinery.

A house is a machine for living in.
Le Corbusier

Everyone knows that a building or a piece of heavy machinery needs a regular maintenance plan. Buildings and machines are complex systems that need tuning and maintenance. Websites are also complex systems. You could say, “A website is a machine for engagement.” To keep that engagement running smoothly, your client needs a plan that includes regular maintenance along with content and feature updates.

The problem with the curve

Typically, websites undergo waves of full redesign, neglect, failure, full redesign. Think of it as a series of bell curves dipping into the negative between revolutionary overhauls.

The revolution approach to managing your web presence.

Your client comes to you with an initial big push to deliver a new web design and content strategy, something that they will be able to manage without your assistance. And you provide that. But once you walk away, the website stops evolving.

During this time, the client’s products or services may evolve, and they may adapt their product-based content to changes in their market—but they don’t touch the website. Like old bread, their website gets stale until the day comes when it’s clear that it needs to be fixed ASAP. That’s when you get the call. There’s a huge drive to do a website redesign, and a big new project is kicked off.

You finish the project and walk away. Again.

But this is a mistake. It’s smarter to show your client how to implement a plan that protects their investment in their website. It’s smarter for the client, and it’s smarter for you too because it allows you to develop an ongoing relationship that ensures you have recurring revenue over a longer period.

Convince your client to break this endless cycle of big, expensive redesign projects every few years. Show them that they need to manage their website the same way they manage product development–by consistently and regularly monitoring and managing their web experience, focusing on ongoing maintenance, interim updates, and major overhauls when needed.

Think evolution not revolution

A digital MRO plan provides continual investment so websites can evolve in a more consistent manner over time–evolution versus revolution. The evolutionary approach requires your client to regularly update their website based on how their company, the industry, and their customer data is changing.

An MRO program for a web presence–the evolution approach.

Define an MRO framework for your client with three phases:

  1. Maintenance: This is the phase that occurs over a long period, with regular monitoring of web pages, content assets, and other resources in addition to functionality. The maintenance phase is about fixing small things, making small changes or updates that don’t require major work on the website. How you can help: Outline a regular maintenance plan where issues are documented and then packaged together into maintenance updates. In some cases, these fixes are content-based, in other cases they are functionality bugs or small updates that need to be applied. You can work on these maintenance updates monthly or more often depending on the situation, delivering regular changes to the website to keep it up to date.
  2. Repair: Repairs are like interim updates. They may require a fair amount of changes to the website to fix a problem or implement a new concept or idea, but they don’t require a full redesign. Some examples include updating or removing a section of the website not visited often, rewriting an outdated key whitepaper, or improving the resources section. They could also include rewrites to web pages for a new version of a product, or the addition of a set of new web pages. How you can help: Whether it’s a set of web pages for a new product, or a redesign of the resources section of the website, recommend quarterly reviews of the website where you can discuss new content or functionality that can be added to the site to improve it for customers and prospects. This requires that you follow trends in both content marketing and design/development, as well as trends in the industry of the client (and their competition). Recommend “mini” projects to implement these interim updates for your client.
  3. Overhaul: During an overhaul phase it’s time for that full redesign. Maybe the client is implementing a new brand, and they need to update their website to reflect it. Maybe they need to implement a modern CMS. Overhaul projects take time and big budgets, and typically take place every five or more years. How you can help: Working with the client on a regular basis on maintenance and small repairs enables you to demonstrate your understanding of the client, their needs and their customers’ needs, proving that you are the right one to run the redesign project. Your knowledge of the industry, along with your experience with the website and the technology it lives on makes you the right choice. Recommend a full website review every four to five years to determine if a redesign is necessary, and to demonstrate how you are in the best position to complete the project successfully.

Your digital MRO plan should prioritize and align work based on the evolution of the customer’s organization or business, as well as the feedback visitors are giving on the website. Incorporating customer feedback and analytics into your MRO plan provides the insight you need to streamline engagement and helps your customer validate the return on investment from their website. You can use surveys, A/B tests, session cams, heat maps, and web analytics reports to focus on the areas of the site that need updating and prioritize projects into each phase of the MRO plan.

The benefits of an MRO program for web presence

With a solid MRO plan you can help your client manage their website like they would their products and services: with regular, consistent updates. Creating a digital MRO plan enables you to show your client how they can get more consistent, predictable ROI from their website and other digital channels and streamline their budget.

When pitching an MRO program to your client, focus on the following benefits:

  • Budget management: By following an MRO program, costs are spread over a longer period instead of a big outlay of time and money for a large project.
  • Improved customer experience: Implementing web analytics, listening posts, surveys, and feedback programs ensures the client is listening to its customers and delivering on customer needs consistently, improving website engagement.
  • Content is never out of date: Product-based content assets are updated in line with product/service improvements, ensuring the most current information is available on the website. You can also help your client plan additions to marketing content assets or add news in line with product updates.
  • Reduced costs and increased ROI: The website is a primary value driver for every business. It’s the best salesperson, the digital storefront, the manifestation of a brand, and a hub for customer services and support. Keeping the website working well will increase digital ROI and lower costs.

Perhaps the biggest benefit of an MRO plan is more successful redesigns. With an MRO program in place, clients can take the guesswork out of large redesign projects. They will have the results of years of optimization to build upon, ensuring that when they do launch the big redesign they will have real data and experience to know what will work.

Be an integral part of an MRO plan

It’s one thing to recommend and sell a client on following an MRO plan, but it’s another to ensure that you and/or your team are an integral part of that plan. Here are some suggestions on how you can build your time and budget into an MRO plan.

  1. Recommend a dedicated cross-functional digital team with time and resources allocated for the website. The team should include capabilities such as a writer, designer, and web developer. Depending on your relationship with the client, one or two of those capabilities, such as content writing/analysis or design and development, should be provided by you or your team.
  2. Schedule monthly cross-functional meetings to brainstorm, research, and validate requirements and ideas for website updates and changes. You should have access to website analytics so you can stay informed about the performance of the website. Based on these meetings, help the client package changes into maintenance or interim updates.
  3. Suggest a process and budget to handle maintenance updates based on your experience with this client and similar clients.
  4. Provide a budget for regular website design and enhancement implementation by you or your team. The scope and regularity of these enhancements will vary based on the needs of the business or organization, but plan for no less than once per quarter. Build in enough time to monitor the client’s industry and competition, as well as review website analytics and content management trends.
  5. Recommend a process for completing a full website review driven by you. This takes the burden off the client to plan and coordinate the review and ensures you are part of the review and recommendations for a redesign.

A proactive approach

For many organizations, the easy route is revolution. It seems easier because it happens only once every few years. But this tactic takes more time and costs much more money up front.

An MRO program ensures businesses are strategically managing their web presence and putting in place the ongoing resources to keep it up to date and relevant for their prospects and customers.

One of those ongoing resources is you. Build your role into the MRO program, indicating where you can provide services that support different phases of the program. Being involved on a regular basis with maintenance and interim updates demonstrates your understanding of the clients’ needs and ensures you will be the one they come to when the big redesign project happens (and it will happen).

Whether you are a single freelancer, a two-person team, or part of a larger agency, the key to building long-term, revenue-generating relationships with clients is getting them to see the value of a proactive approach for website management. An MRO program can help you do that.

News stories from Tuesday 28 June, 2016

Favicon for A List Apart: The Full Feed 16:00 The Foundation of Technical Leadership » Post from A List Apart: The Full Feed Visit off-site link

I’m a front-end architect, but I’m also known as a technical leader, subject matter expert, and a number of other things. I came into my current agency with five years of design and development management experience; yet when it came time to choose a path for my career with the company, I went the technical route.

I have to confess I had no idea what a technical leader really does. I figured it out, eventually.

Technical experts are not necessarily technical leaders. Both have outstanding technical skills; the difference is in how others relate to you. Are you a person that others want to follow? That’s the question that really matters. Here are some of the soft skills that set a technical leader apart from a technical expert.

Help like it’s your job

Your authority in a technical leadership position—or any leadership position—is going to arise from what you can do for (or to) other people. Healthy authority here stems from you being known as a tried-and-true problem-solver for everyone. The goal is for other people to seek you out, not for you to be chasing down people for code reviews. For this to happen, intelligence and skill are not enough—you need to make a point of being helpful.

For the technical leader, if you’re too busy to help, you’re not doing your job—and I don’t just mean help someone when they come by and ask for help. You may have to set an expectation with your supervisor that helping others is a vital part of a technical leader’s job. But guess what? It might be billable time—check with your boss. Even if it’s not, try to estimate how much time it’s saving your coworkers. Numbers speak volumes.

The true measure of how helpful you are is the technical know-how of the entire team. If you’re awesome but your team can’t produce excellent work, you’re not a technical leader—you’re a high-level developer. There is a difference. Every bit of code you write, every bit of documentation you put together should be suitable to use as training for others on your team. When making a decision about how to solve a problem or what technologies to use, think about what will help future developers.

My job as front-end architect frequently involves not only writing clean code, but cleaning up others’ code to aid in reusability and comprehension by other developers. That large collection of functions might work better as an object, and it’ll probably be up to you to make that happen, whether through training or just doing it.

Speaking of training, it needs to be a passion. Experience with and aptitude for training were probably the biggest factors in me landing the position as front-end architect. Public speaking is a must. Writing documentation will probably fall on you. Every technical problem that comes your way should be viewed as an opportunity to train the person who brought it to you.

Helping others, whether they’re other developers, project managers, or clients, needs to become a passion for you if you’re an aspiring technical leader. This can take a lot of forms, but it should permeate into everything you do. That’s why this is rule number one.

Don’t throw a mattress into a swimming pool

An infamous prank can teach us something about being a technical leader. Mattresses are easy to get into swimming pools; but once they’re in there, they become almost impossible to get out. Really, I worked the math on this: a queen-sized mattress, once waterlogged, will weigh over 2000 pounds.

A lot of things are easy to work into a codebase: frameworks, underlying code philosophies, even choices on what technology to use. But once a codebase is built on a foundation, it becomes nearly impossible to get that foundation out of there without rebuilding the entire codebase.

Shiny new framework seem like a good idea? You’d better hope everyone on your team knows how to use that framework, and that the framework’s around in six months. Don’t have time to go back and clean up that complex object you wrote to handle all the AJAX functionality? Don’t be surprised when people start writing unneeded workarounds because they don’t understand your code. Did you leave your code in a state that’s hard to read and modify? I want you to imagine a mattress being thrown into a swimming pool…

Failure to heed this command frequently results in you being the only person who can work on a particular project. That is never a good situation to be in.

Here is one of the big differences between a technical expert and a technical leader: a technical expert could easily overlook that consideration. A technical leader would take steps to ensure that it never happens.

As a technical expert, you’re an A player, and that expertise is needed everywhere; and as a technical leader, it’s your job to make sure you can supply it, whether that means training other developers, writing and documenting code to get other developers up to speed, or intentionally choosing frameworks and methodologies your team is already familiar with.

Jerry Weinberg, in The Psychology of Computer Programming, said, “If a programmer is indispensable, get rid of him as quickly as possible!” If you’re in a position where you’re indispensable to a long-term project, fixing that needs to be a top priority. You should never be tied down to one project, because your expertise is needed across the team.

Before building a codebase on anything, ask yourself what happens when you’re no longer working on the project. If the answer is they have to hire someone smarter than you or the project falls apart, don’t include it in the project.

And as a leader, you should be watching others to make sure they don’t make the same mistake. Remember, technology decisions usually fall on the technical leader, no matter who makes them.

You’re not the only expert in the room

“Because the new program is written for OS 8 and can function twice as fast. Is that enough of a reason, Nancy Drew?”

That’s the opening line of Nick Burns, Your Company’s Computer Guy, from the Saturday Night Live sketch with the same name. He’s a technical expert who shows up, verbally abuses you, fixes your computer, and then insults you some more before shouting, “Uh, you’re welcome!” It’s one of those funny-because-it’s-true things.

The stereotype of the tech expert who treats everyone else as inferiors is so prevalent that it’s worked its way into comedy skits, television shows, and watercooler conversations in businesses across the nation.

I’ve dealt with the guy (or gal). We all have. You know the guy, the one who won’t admit fault, who gets extremely defensive whenever others suggest their own ideas, who views his intellect as superior to others and lets others know it. In fact, everyone who works with developers has dealt with this person at some point.

It takes a lot more courage and self-awareness to admit that I’ve been that guy on more than one occasion. As a smart guy, I’ve built my self esteem on that intellect. So when my ideas are challenged, when my intellect is called into question, it feels like a direct assault on my self esteem. And it’s even worse when it’s someone less knowledgeable than me. How dare they question my knowledge! Don’t they know that I’m the technical expert?

Instead of viewing teammates as people who know less than you, try to view them as people who know more than you in different areas. Treat others as experts in other fields that you can learn from. That project manager may not know much about your object-oriented approach to the solution, but she’s probably an expert in how the project is going and how the client is feeling about things.

Once again, in The Psychology of Computer Programming, Weinberg said, “Treat people who know less than you with respect, deference, and patience.” Take it a step further. Don’t just treat them that way—think of them that way. You’d be amazed how much easier it is to work with equals rather than intellectually inferior minions—and a change in mindset might be all that’s required to make that difference.

Intelligence requires clarity

It can be tempting to protect our expertise by making things appear more complicated than they are. But in reality, it doesn’t take a lot of intelligence to make something more complicated than it needs to be. It does, however, take a great deal of intelligence to take something complicated and make it easy to understand.

If other developers, and non-technical people, can’t understand your solution when you explain it in basic terms, you’ve got a problem. Please don’t hear that as “All good solutions should be simple,” because that’s not the case at all—but your explanations should be. Learn to think like a non-technical person so you can explain things in their terms. This will make you much more valuable as a technical leader.

And don’t take for granted that you’ll be around to explain your solutions. Sometimes, you’ll never see the person implementing your solution, but that email you sent three weeks ago will be. Work on your writing skills. Pick up a copy of Steven Pinker’s The Sense of Style and read up on persuasive writing. Start a blog and write a few articles on what your coding philosophies are.

The same principle extends to your code. If code is really hard to read, it’s usually not a sign that a really smart person wrote it; in fact, it usually means the opposite. Speaker and software engineer Martin Fowler once said, “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

Remember: clarity is key. The perception of your intelligence is going to define the reality of your work experience, whether you like it or not.

You set the tone

Imagine going to the doctor to explain some weird symptoms you’re having. You sit down on the examination bed, a bit nervous and a bit confused as to what’s actually going on. As you explain your condition, the doctor listens with widening eyes and shaking hands. And the more you explain, the worse it gets. This doctor is freaking out. When you finally finish, the doctor stammers, “I don’t know how to handle that!”

How would you feel? What would you do? If it were me, I’d start saying goodbye to loved ones, because that’s a bad, bad sign. I’d be in a full-blown panic based on the doctor’s reaction.

Now imagine a project manager comes to you and starts explaining the weird functionality needed for a particularly tricky project. As you listen, it becomes clear that this is completely new territory for you, as well as for the company. You’re not even sure if what they’re asking is possible.

How do you respond? Are you going to be the crazy doctor above? If you are, I can assure you the project manager will be just as scared as you are, if not more so.

I’m not saying you should lie and make something up, because that’s even worse. But learning to say “I don’t know” without a hint of panic in your voice is an art that will calm down project teams, clients, supervisors, and anyone else involved in a project. (Hint: it usually involves immediately following up with, “but I’ll check it out.”)

As a technical leader, people will follow your emotional lead as well as your technical lead. They’ll look to you not only for the answers, but for the appropriate level of concern. If people leave meetings with you more worried than they were before, it’s probably time to take a look at how your reactions are influencing them.

Real technical leadership

Technical leadership is just as people-centric as other types of leadership, and knowing how your actions impact others can make all the difference in the world in moving from technical expert to technical leader. Remember: getting people to follow your lead can be even more important than knowing how to solve technical problems. Ignoring people can be career suicide for a technical leader—influencing them is where magic really happens.

 

News stories from Monday 27 June, 2016

Favicon for A List Apart: The Full Feed 07:01 This week's sponsor: Skillshare » Post from A List Apart: The Full Feed Visit off-site link

​SKILLSHARE. Explore 1000’s of online classes in design, business, and more! Get 3 months of unlimited access for $0.99.

News stories from Tuesday 21 June, 2016

Favicon for A List Apart: The Full Feed 16:00 The Future of the Web » Post from A List Apart: The Full Feed Visit off-site link

Recently the web—via Twitter—erupted in short-form statements that soon made it clear that buttons had been pushed, sides taken, and feelings felt. How many feels? All the feels. Some rash words may have been said.

But that’s Twitter for you.

It began somewhat innocuously off-Twitter, with a very reasonable X-Men-themed post by Brian Kardell (one of the authors of the Extensible Web Manifesto). Brian suggests that the way forward is by opening up (via JavaScript) some low-level features that have traditionally been welded shut in the browser. This gives web developers and designers—authors, in the parlance of web standards—the ability to prototype future native browser features (for example, by creating custom elements).

If you’ve been following all the talk about web components and the shadow DOM of late, this will sound familiar. The idea is to make standards-making a more rapid, iterative, bottom-up process; if authors have the tools to prototype their own solutions or features (poly- and prolly-fills), then the best of these solutions will ultimately rise to the top and make their way into the native browser environments.

This sounds empowering, collaborative—very much in the spirit of the web.

And, in fact, everything seemed well on the World Wide Web until this string of tweets by Alex Russell, and then this other string of tweets. At which point everyone on the web sort of went bananas.

Doomsday scenarios were proclaimed; shadowy plots implied; curt, sweeping ideological statements made. In short, it was the kind of shit-show you might expect from a touchy, nuanced subject being introduced on Twitter.

But why is it even touchy? Doesn’t it just sound kind of great?

Oh wait JavaScript

Whenever you talk about JavaScript as anything other than an optional interaction layer, folks seem to gather into two big groups.

On the Extensible Web side, we can see the people who think JavaScript is the way forward for the web. And there’s some historical precedent for that. When Brendan Eich created JavaScript, he was aware that he was putting it all together in a hurry, and that he would get things wrong. He wanted JavaScript to be the escape hatch by which others could improve his work (and fix what he got wrong). Taken one step further, JavaScript gives us the ability to extend the web beyond where it currently is. And that, really, is what the Extensible Web Manifesto folks are looking to do.

The web needs to compete with native apps, they assert. And until we get what we need natively in the browser, we can fake it with JavaScript. Much of this approach is encapsulated in the idea of progressive web apps (offline access, tab access, file system access, a spot on the home screen)—giving the web, as Alex Russell puts it, a fair fight.

On the other side of things, in the progressive enhancement camp, we get folks that are worried these approaches will leave some users in the dust. This is epitomized by the “what about users with no JavaScript” argument. This polarizing question—though not the entire issue by far—gets at the heart of the disagreement.

For the Extensible Web folks, it feels like we’re holding the whole web back for a tiny minority of users. For the Progressive Enhancement folks, it’s akin to throwing out accessibility—cruelly denying access to a subset of (quite possibly disadvantaged) users.


During all this hubbub, Jeremy Keith, one of the most prominent torchbearers for progressive enhancement, reminded us that nothing is absolute. He suggests that—as always—the answer is “it depends.” Now this should be pretty obvious to anyone who’s spent a few minutes in the real world doing just about anything. And yet, at the drop of a tweet, we all seem to forget it.

So if we can all take a breath and rein in our feelings for a second, how might we better frame this whole concept of moving the web forward? Because from where I’m sitting, we’re all actually on the same side.

History and repetition

To better understand the bigger picture about the future of the web, it’s useful (as usual) to look back at its past. Since the very beginning of the web, there have been disagreements about how best to proceed. Marc Andreessen and Tim Berners-Lee famously disagreed about the IMG tag. Tim didn’t get his way, Marc implemented IMG in Mosaic as he saw fit, and we all know how things spun out from there. It wasn’t perfect, but a choice had to be made and it did the job. History suggests that IMG did its job fairly well.

A pattern of hacking our way to the better solution becomes evident when you follow the trajectory of the web’s development.

In the 1990’s, webmasters and designers wanted layout like they were used to in print. They wanted columns, dammit. David Siegel formalized the whole tables-and-spacer-GIFs approach in his wildly popular book Creating Killer Web Sites. And thus, the web was flooded with both design innovation and loads of un-semantic markup. Which we now know is bad. But those were the tools that were available, and they allowed us to express our needs at the time. Life, as they say…finds a way.

And when CSS layout came along, guess what it used as a model for the kinds of layout techniques we needed? That’s right: tables.

While we’re at it, how about Flash? As with tables, I’m imagining resounding “boos” from the audience. “Boo, Flash!” But if Flash was so terrible, why did we end up with a web full of Flash sites? I’ll tell you why: video, audio, animation, and cross-browser consistency.

In 1999? Damn straight I want a Flash site. Once authors got their hands on a tool that let them do all those incredible things, they brought the world of web design into a new era of innovation and experimentation.

But again with the lack of semantics, linkability, and interoperability. And while we were at it, with the tossing out of an open, copyright-free platform. Whoops.

It wasn’t long, though, before the native web had to sit up and take notice. Largely because of what authors expressed through Flash, we ended up with things like HTML5, Ajax, SVGs, and CSS3 animations. We knew the outcomes we wanted, and the web just needed to evolve to give us a better solution than Flash.

In short: to get where we need to go, we have to do it wrong first.

Making it up as we go along

We authors express our needs with the tools available to help model what we really need at that moment. Best practices and healthy debate are a part of that. But please, don’t let the sort of emotions we attach to politics and religion stop you from moving forward, however messily. Talk about it? Yes. But at a certain point we all need to shut our traps and go build some stuff. Build it the way you think it should be built. And if it’s good—really good—everyone will see your point.

If I said to you, “I want you to become a really great developer—but you’re not allowed to be a bad developer first,” you’d say I was crazy. So why would we say the same thing about building the web?

We need to try building things. Probably, at first, bad things. But the lessons learned while building those “bad” projects point the way to the better version that comes next. Together we can shuffle toward a better way, taking steps forward, back, and sometimes sideways. But history tells us that we do get there.

The web is a mess. It is, like its creators, imperfect. It’s the most human of mediums. And that messiness, that fluidly shifting imperfection, is why it’s survived this long. It makes it adaptable to our quickly-shifting times.

As we try to extend the web, we may move backward at the same time. And that’s OK. That imperfect sort of progress is how the web ever got anywhere at all. And it’s how it will get where we’re headed next.

Context is everything

One thing that needs to be considered when we’re experimenting (and building things that will likely be kind of bad) is who the audience is for that thing. Will everyone be able to use it? Not if it’s, say, a tool confined to a corporate intranet. Do we then need to worry about sub-3G network users? No, probably not. What about if we’re building on the open web but we’re building a product that is expressly for transferring or manipulating HD video files? Do we need to worry about slow networks then? The file sizes inherent in the product pretty much exclude slow networks already, so maybe that condition can go out the window there, too.

Context, as usual, is everything. There needs to be realistic assessment of the risk of exclusion against the potential gains of trying new technologies and approaches. We’re already doing this, anyway. Show me a perfectly progressively enhanced, perfectly accessible, perfectly performant project and I’ll show you a company that never ships. We do our best within the constraints we have. We weigh potential risks and benefits. And then we build stuff and assess how well it went; we learn and improve.

When a new approach we’re trying might have aspects that are harmful to some users, it’s good to raise a red flag. So when we see issues with one another’s approaches, let’s talk about how we can fix those problems without throwing out the progress that’s been made. Let’s see how we can bring greater experiences to the web without leaving users in the dust.

If we can continue to work together and consciously balance these dual impulses—pushing the boundaries of the web while keeping it open and accessible to everyone—we’ll know we’re on the right track, even if it’s sometimes a circuitous or befuddling one. Even if sometimes it’s kind of bad. Because that’s the only way I know to get to good.

News stories from Friday 17 June, 2016

Favicon for A List Apart: The Full Feed 18:30 Help One of Our Own: Carolyn Wood » Post from A List Apart: The Full Feed Visit off-site link

One of the nicest people we’ve ever known and worked with is in a desperate fight to survive. Many of you remember her—she is a gifted, passionate, and tireless worker who has never sought the spotlight and has never asked anything for herself.

Carolyn Wood spent three brilliant years at A List Apart, creating the position of acquisitions editor and bringing in articles that most of us in the web industry consider essential reading—not to mention more than 100 others that are equally vital to what we do today. Writers loved her. Since 1999, she has also worked on great web projects like DigitalWeb, The Manual, and Codex: The Journal of Typography.

Think about it. What would the web look like if she hadn’t been a force behind articles like these:

Three years ago, Carolyn was confined to a wheelchair. Then it got worse. From the YouCaring page:

This April, after a week-long illness, she developed acute injuries to the tendons in her feet and the nerves in her right hand and arm. She couldn’t get out of her wheelchair, even to go to the bathroom. At the hospital, they discovered Carolyn had acute kidney failure. After a month in a hospital and a care facility she has bounced back from the kidney failure, but she cannot take painkillers to help her hands and feet.

Carolyn cannot stand or walk or dress herself or take a shower. She is dependent on a lift, manned by two people, to transfer her. Without it she cannot leave her bed.

She’s now warehoused in a home that does not provide therapy—and her insurance does not cover the cost. Her bills are skyrocketing. (She even pays rent on her bed for $200 a month!)

Perhaps worst of all—yes, this gets worse—is that her husband has leukemia. He’s dealing with his own intense pain and fatigue and side effects from twice-monthly infusions. They are each other’s only support, and have been living apart since April. They have no income other than his disability, and are burning through their life savings.

This is absolutely a crisis situation. We’re pulling the community together to help Carolyn—doing anything we possibly can. Her bills are truly staggering. She has no way to cover basic life expenses, much less raise the huge sums required to get the physical and occupational therapy she needs to be independent again.

Please help by donating anything you can, and by sharing Carolyn’s support page with anyone in your network who is compassionate and will listen.

 

News stories from Thursday 16 June, 2016

Favicon for A List Apart: The Full Feed 16:58 This week's sponsor: Bitbucket » Post from A List Apart: The Full Feed Visit off-site link

BITBUCKET: Over 450,000 teams and 3 million developers love Bitbucket - it’s built for teams! Try it free.

News stories from Tuesday 14 June, 2016

Favicon for A List Apart: The Full Feed 16:00 Promoting a Design System Across Your Products » Post from A List Apart: The Full Feed Visit off-site link

The scene: day one of a consulting gig with a new client to build a design and code library for a web app. As luck would have it, the client invited me to sit in on a summit of 25 design leaders from across their enterprise planning across platforms and lines of business. The company had just exploded from 30 to over 100 designers. Hundreds more were coming. Divergent product design was everywhere. They dug in to align efforts.

From a corner, I listened quietly. I was the new guy, minding my own business, comfortable with my well-defined task and soaking up strategy. Then, after lunch, the VP of Digital Design pulled me into an empty conference room.

“Can you refresh me on your scope?” she asked. So I drew an account hub on the whiteboard.

Diagram showing an account hub

“See, the thing is…” she responded, standing up and taking my pen. “We’re redesigning our web marketing homepage now.” She added a circle. “We’re also reinventing online account setup.” Another circle, then arrows connecting the three areas. “We’ve just launched some iOS apps, and more—plus Android—are coming.” She added more circles, arrows, more circles.

Diagram showing an interconnected enterprise ecosystem: marketing, account setup, account hub, plus iOS apps

“I want it all cohesive. Everything.” She drew a circle around the entire ecosystem. “Our design system should cover all of this. You can do that, right?”

A long pause, then a deep breath. Our design system—the parts focused on, the people involved, the products reached—had just grown way more complicated.

Our industry is getting really good at surfacing reusable parts in a living style guide: visual language like color and typography, components like buttons and forms, sophisticated layouts, editorial voice and tone, and so on. We’ve also awoken to the challenges of balancing the centralized and federated influence of the people involved. But there’s a third consideration: identifying and prioritizing the market of products our enterprise creates that our system will reach.

As a systems team, we need to ask: what products will use our system and how will we involve them?

Produce a product inventory

While some enterprises may have an authoritative and up-to-date master list of products, I’ve yet to work with one. There’s usually no more than a loose appreciation of a constantly evolving product portfolio.

Start with a simple product list

A simple list is easy enough. Any whiteboard or text file will do. Produce the list quickly by freelisting as many products as you can think of with teammates involved in starting the system. List actual products (“Investor Relations” and “Careers”), not types of products (such as “Corporate Subsites”).

Some simple product lists
Large Corporate Web Site (5–15 products):

  • Homepage
  • Products
  • Support
  • About
  • Careers

Small Product Company (10–25 products):

  • Web marketing site
  • Web support site
  • Web corporate site
  • Community site 1
  • Community site 2
  • Web app basic
  • Web app premium
  • Web app 3
  • Web app 4
  • Windows flagship client
  • Windows app 2

Large Enterprise (20–100 products):

  • Web home
  • Web product pages
  • Web product search
  • Web checkout
  • Web support
  • Web rewards program
  • iOS apps (10+)
  • Android apps (10+)
  • Web account mgmt (5+)
  • Web apps (10+)

Note that because every enterprise is unique, the longer the lists get, the more specific they become.

For broader portfolios, gather more details

If your portfolio is more extensive, you’ll need more deliberate planning and coordination of teams spanning an organization. This calls for a more structured, detailed inventory. It’s spreadsheet time, with products as rows and columns for the following:

  • Name, such as Gmail
  • Type / platform: web site, web app, iOS, Android, kiosk, etc.
  • Product owner, if that person even exists
  • Description (optional)
  • People (optional), like a product manager, lead designer or developer, or others involved in the product
  • Other metadata (optional): line of business, last redesigned, upcoming redesign, tech platform, etc.
A detailed product inventory.

Creating such an inventory can feel draining for a designer. Some modern digital organizations struggle to fill out an inventory like this. I’m talking deer-in-headlights kind of struggling. Completely locked up. Can’t do it. But consider life without it: if you don’t know the possible players, you may set yourself up for failure, or at least a slower road to success. Therefore, take the time to understand the landscape, because the next step is choosing the right products to work with.

Prioritize products into tiers

A system effort is never equally influenced by every product it serves. Instead, the system must know which products matter—and which don’t—and then varyingly engage each in the effort. You can quickly gather input on product priorities from your systems team and/or leaders using techniques like cumulative voting.

Your objective is to classify products into tiers, such as Flagship (the few, essential core products), Secondary (additional influential products), and The Rest to orient strategy and clarify objectives.

1—Organize around flagships

Flagship products are the limited number of core products that a system team deeply and regularly engages with. These products reflect a business’ core essence and values, and their adoption of a system signals the system’s legitimacy.

Getting flagship products to participate is essential, but challenging. Each usually has a lot of individual power and operates autonomously. Getting flagships to share and realize a cohesive objective requires effort.

Choose flagships that’ll commit to you, too

When naming flagships, you must believe they’ll play nice and deliver using the system. Expect to work to align flagships: they can be established, complicated, and well aware of their flagship status. Nevertheless, if all flagships deliver using the system, the system is an unassailable standard. If any avoid or obstruct the system, the system lacks legitimacy.

Takeaway: obtain firm commitments, such as “We will ship with the system by such and such a date” or “Our product MVP must use this design system.” A looser “Yes, we’ll probably adopt what we can” lacks specificity and fidelity.

Latch onto a milestone, or make your own

Flagship commitment can surface as a part of a massive redesign, corporate rebranding, or executive decree. Those are easy events to organize around. Without one, you’ll need to work harder bottom-up to align product managers individually.

Takeaway: establish a reasonable adoption milestone you can broadcast, after which all flagships have shipped with the system.

Choose wisely (between three and five)

For a system to succeed, flagships must ship with it. So choose just enough. One flagship makes the system’s goals indistinguishable from its own self-interest. Two products don’t offer enough variety of voices and contexts to matter. Forming a foundation with six or more “equally influential voices” can become chaotic.

Takeaway: three flagships is the magic minimum, offering sufficient range and incorporating an influential and sometimes decisive third perspective. Allowing for four or five flagships is feasible but will test a group’s ability to work together fluidly.

A system for many must be designed by many

Enterprises place top talent on flagship products. It would be naive to think that your best and brightest will absorb a system that they don’t influence or create themselves. It’s a team game, and getting all-stars working well together is part of your challenge.

Takeaway: integrate flagship designers from the beginning, as you design the system, to inject the right blend of individual styles and shared beliefs.

2—Blend in a secondary set

More products—a secondary set—are also important to a system’s success. Such products may not be flagships because they are between major releases (making adoption difficult), not under active development, or even just slightly less valuable.

Include secondary products in reference designs

Early systems efforts can explore concept mockups—also known as reference designs—to assess a new visual language across many products. Reference designs reveal an emerging direction and serve as “before and after” roadshow material.

Takeaway: include secondary products in early design concepts to acknowledge the value of those products, align the system with their needs, and invite their teams to adopt the system early.

Welcome participation (but moderate contribution)

Systems benefit from an inclusive environment, so bias behaviors toward welcoming input. Encourage divergent ideas, but know that it’s simply not practical to give everyone a voice in everything. Jon Wiley, an early core contributor to Google’s Material Design, shared some wisdom with me during a conversation: “The more a secondary product’s designer participated and injected value, the more latitude they got to interpret and extend the system for their context.”

Takeaway: be open to—but carefully moderate—the involvement of designers on secondary products.

3—Serve the rest at a greater distance

The bigger the enterprise, the longer and more heterogeneous the long tail of other products that could ultimately adopt the system. A system’s success is all about how you define and message it. For example, adopting the core visual style might be expected, but perhaps rigorous navigational integration and ironclad component consistency aren’t goals.

Documentation may be your primary—or only—channel to communicate how to use the system. Beyond that, your budding system team may not have the time for face-to-face meetings or lengthy discussions.

Takeaway: early on, limit focus on and engagement with remaining products. As a system matures, gradually invest in lightweight support activities like getting-started sessions, audits, and triaging office-hour clinics.

Adjust approach depending on context

Every product portfolio is different, and thus so is every design system. Let’s consider the themes and dynamics from some archetypal contexts we face repeatedly in our work.

Example 1: large corporate website, made of “properties”

You know: the homepage-as-gateway-to-products hegemon (owned by Marketing) integrated with Training, Services, and About Us content (owned by less powerful fiefdoms) straddling a vast ocean of transactional features like Support/Account Management and Communities. All of these “properties” have drifted apart, and some trigger—the decision to go responsive, a rebranding, or an annoyed-enough-to-care executive—dictates that it’s “time to unify!”

Typical web marketing sitemap, overlaid with a product section team’s choices on spreading a system beyond its own section.

The get? Support

System influence usually radiates from Marketing and Brand through to selling Products. But Support is where customers spend most of their time: billing, admin, downloading, troubleshooting. Support’s features are complicated, with intricate UI and longer release cycles across multiple platforms. It may be the most difficult section to integrate, but it’s essential.

Takeaway: if your gets—in this case Home, Products, and Support—deliver, you win. Everyone else will either follow or look bad. That’s your flagship set.

Minimize homepage distraction

Achieving cohesive design is about suffusing an entire experience with it. Yet a homepage is often the part of a site that is most exposed to, and justifiably distinct from, otherwise reusable componentry. It has tons of cooks, unique and often complex parts, and changes frequently. Such qualities—indecisiveness, complexity, and instability—corrode systems efforts.

Takeaway: don’t fall prey to the homepage distraction. Focus on stable fundamentals that you can confidently spread.

Exploit navigational change to integrate a system hook

As branding or navigation changes, so does a header. It appears everywhere, and changes to it can be propagated centrally. Get those properties—particularly those lacking full-time design support—to sync with a shared navigation service, and use that hook to open access to the greater goodies your system has to offer.

Takeaway: exploit the connection! Adopters may not embrace all your parts, but since you are injecting your code into their environment, they could.

Example 2: a modest product portfolio

A smaller company’s strategic shifts can be chaotic, lending themselves to an unstable environment in which to apply a system. Nevertheless, a smaller community of designers—often a community of practice dispersed across a portfolio—can provide an opportunity to be more cohesive.

Radiate influence from web apps

Many small companies assemble portfolios of websites, web apps, and their iOS, Android, and Windows counterparts. Websites and native apps share little beyond visual style and editorial tone. However, web apps provide a pivot: they can share a far deeper overlap of components and tooling with websites, and their experiences often mirror what’s found on native apps.

Takeaway: look for important products whose interests overlap many other products, and radiate influence from there.

Diagram of product relationships within a portfolio, with web apps relating to both web sites and native apps.

Demo value across the whole journey

A small company’s flagship products should be the backbone of a customer’s journey, from reach and acquisition through service and loyalty. Design activities that express the system’s value from the broader user journey tend to reveal gaps, identify clunky handoffs, and trigger real discussions around cohesiveness.

Takeaway: evoke system aspirations by creating before/after concepts and demoing cohesiveness across the journey, such as with a stitched prototype.

For Marriott.com, disparate design artifacts across products (left) were stitched together into an interactive, interconnected prototype (right).

Bridge collaboration beyond digital

Because of their areas of focus, “non-digital” designers (working on products like trade-show booths, print, TV, and retail) tend to be less savvy than their digital counterparts when it comes to interaction. Nonetheless, you’ll share the essence of your visual language with them, such as making sure the system’s primary button doesn’t run afoul of the brand’s blue, and yet provides sufficient contrast for accessibility.

Takeaway: encourage non-digital designers to do digital things. Be patient and inclusive, even if their concerns sometimes drift away from what you care about most.

Example 3: a massive multiplatform enterprise

For an enterprise as huge as Google, prioritizing apps was essential to Material Design’s success. The Verge’s “Redesigning Google: How Larry Page Engineered a Beautiful Revolution” suggests strong prioritization, with Search, Maps, Gmail, and later Android central to the emerging system. Not as much in the conversation, perhaps early on? Docs, Drive, Books, Finance. Definitely not SantaTracker.

Broaden representation across platforms & businesses

With coverage across a far broader swath of products, ensure flagship product selection spans a few platforms and lines of business. If you want it to apply everywhere, then the system—how it’s designed, developed, and maintained—will benefit from diverse influences.

Takeaway: Strive for diverse system contribution and participation in a manner consistent with the products it serves.

Mix doers & delegators

Massive enterprise systems trigger influence from many visionaries. Yet you can’t rely on senior directors to produce meticulous, thoughtful concepts. Such leaders already direct and manage work across many products. Save them from themselves! Work with them to identify design talent with pockets of time. Even better, ask them to lend a doer they recommend for a month- or weeklong burst.

Takeaway: defer to creative leaders on strategy, but redirect their instincts from doing everything to identifying and providing talent.

Right the fundamentals before digging deep

I confess that in the past, I’ve brought a too-lofty ambition to bear on quickly building huge libraries for organizations of many, many designers. Months later, I wondered why our team was still refining the “big three” (color, typography, and iconography) or the “big five” (the big three, plus buttons and forms). Um, what? Given the system’s broad reach, I had to adjust my expectations to be satisfied with what was still a very consequential shift toward cohesiveness.

Takeaway: balance ambition for depth with spreading fundamentals wide across a large enterprise, so that everyone shares a core visual language.

The long game

Approach a design system as you would a marathon, not a sprint. You’re laying the groundwork for an extensive effort. By understanding your organization through its product portfolio, you’ll strengthen a cornerstone—the design system—that will help you achieve a stronger and more cohesive experience.

News stories from Friday 10 June, 2016

Favicon for Kopozky 19:06 Not Sunny! » Post from Kopozky Visit off-site link

Comic strip: “Not Sunny!”

Starring: The Admin


News stories from Tuesday 07 June, 2016

Favicon for A List Apart: The Full Feed 16:00 Making your JavaScript Pure » Post from A List Apart: The Full Feed Visit off-site link

Once your website or application goes past a small number of lines, it will inevitably contain bugs of some sort. This isn’t specific to JavaScript but is shared by nearly all languages—it’s very tricky, if not impossible, to thoroughly rule out the chance of any bugs in your application. However, that doesn’t mean we can’t take precautions by coding in a way that lessens our vulnerability to bugs.

Pure and impure functions

A pure function is defined as one that doesn’t depend on or modify variables outside of its scope. That’s a bit of a mouthful, so let’s dive into some code for a more practical example.

Take this function that calculates whether a user’s mouse is on the left-hand side of a page, and logs true if it is and false otherwise. In reality your function would probably be more complex and do more work, but this example does a great job of demonstrating:


function mouseOnLeftSide(mouseX) {
    return mouseX < window.innerWidth / 2;
}

mouseOnLeftSide() takes an X coordinate and checks to see if it’s less than half the window width—which would place it on the left side. However, mouseOnLeftSide() is not a pure function. We know this because within the body of the function, it refers to a value that it wasn’t explicitly given:


return mouseX < window.innerWidth / 2;

The function is given mouseX, but not window.innerWidth. This means the function is reaching out to access data it wasn’t given, and hence it’s not pure.

The problem with impure functions

You might ask why this is an issue—this piece of code works just fine and does the job expected of it. Imagine that you get a bug report from a user that when the window is less than 500 pixels wide the function is incorrect. How do you test this? You’ve got two options:

  • You could manually test by loading up your browser and moving your mouse around until you’ve found the problem.
  • You could write some unit tests (Rebecca Murphey’s Writing Testable JavaScript is a great introduction) to not only track down the bug, but also ensure that it doesn’t happen again.

Keen to have a test in place to avoid this bug recurring, we pick the second option and get writing. Now we face a new problem, though: how do we set up our test correctly? We know we need to set up our test with the window width set to less than 500 pixels, but how? The function relies on window.innerWidth, and making sure that’s at a particular value is going to be a pain.
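
To get a feel for why that’s a pain, here is a rough sketch of one way a test might try to fake the global (this workaround is an assumption for illustration, not a recommendation):


// Save the real value, force the one the test needs, then put it back.
var realWidth = window.innerWidth;
window.innerWidth = 499;

console.log(mouseOnLeftSide(5)); // should log true for a narrow window

window.innerWidth = realWidth;

Even when this works, it mutates shared global state, so tests can no longer run in isolation, and any test that forgets to restore the value poisons the ones that follow.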

Benefits of pure functions

Simpler testing

With that issue of how to test in mind, imagine we’d instead written the code like so:


function mouseOnLeftSide(mouseX, windowWidth) {
    return mouseX < windowWidth / 2;
}

The key difference here is that mouseOnLeftSide() now takes two arguments: the mouse X position and the window width. This means that mouseOnLeftSide() is now a pure function; all the data it needs is explicitly given as inputs, and it never has to reach out to access any data.

In terms of functionality, it’s identical to our previous example, but we’ve dramatically improved its maintainability and testability. Now we don’t have to hack around to fake window.innerWidth for any tests, but instead just call mouseOnLeftSide() with the exact arguments we need:


mouseOnLeftSide(5, 499) // ensure it works with width < 500
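
As a sketch of how those tests might look with a plain assertion library (Node’s built-in assert is just one option here, and the exact cases are illustrative):


var assert = require('assert');

// A 1000px-wide window: x = 300 is on the left half.
assert.strictEqual(mouseOnLeftSide(300, 1000), true);

// The bug report scenario: a window narrower than 500px.
assert.strictEqual(mouseOnLeftSide(5, 499), true);
assert.strictEqual(mouseOnLeftSide(400, 499), false);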

Self-documenting

Besides being easier to test, pure functions have other characteristics that make them worth using whenever possible. By their very nature, pure functions are self-documenting. If you know that a function doesn’t reach out of its scope to get data, you know the only data it can possibly touch is passed in as arguments. Consider the following function definition:


function mouseOnLeftSide(mouseX, windowWidth)

You know that this function deals with two pieces of data, and if the arguments are well named it should be clear what they are. We all have to deal with the pain of revisiting code that’s lain untouched for six months, and being able to regain familiarity with it quickly is a key skill.

Avoiding globals in functions

The problem of global variables is well documented in JavaScript—the language makes it trivial to store data globally where all functions can access it. This is a common source of bugs, too, because anything could have changed the value of a global variable, and hence the function could now behave differently.

An additional property of pure functions is referential transparency. This is a rather complex term with a simple meaning: given the same inputs, the output is always the same. Going back to mouseOnLeftSide, let’s look at the first definition we had:


function mouseOnLeftSide(mouseX) {
    return mouseX < window.innerWidth / 2;
}

This function is not referentially transparent. I could call it with the input 5 multiple times, resize the window between calls, and the result would be different every time. This is a slightly contrived example, but functions that return different values even when their inputs are the same are always harder to work with. Reasoning about them is harder because you can’t guarantee their behavior. For the same reason, testing is trickier, because you don’t have full control over the data the function needs.

On the other hand, our improved mouseOnLeftSide function is referentially transparent because all its data comes from inputs and it never reaches outside itself:


function mouseOnLeftSide(mouseX, windowWidth) {
    return mouseX < windowWidth / 2;
}

You get referential transparency for free when following the rule of declaring all your data as inputs, and by doing this you eliminate an entire class of bugs around side effects and functions acting unexpectedly. If you have full control over the data, you can hunt down and replicate bugs much more quickly and reliably without chancing the lottery of global variables that could interfere.
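
To make that contrast concrete, here is a small sketch (the widths are arbitrary):


// Impure version: the same input can give different answers,
// because the result also depends on window.innerWidth.
mouseOnLeftSide(400); // true while the window is 1000px wide
// ...the user resizes the window to 600px...
mouseOnLeftSide(400); // now false, even though the input hasn't changed

// Pure version: the answer depends only on its arguments.
mouseOnLeftSide(400, 1000); // always true
mouseOnLeftSide(400, 600);  // always false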

Choosing which functions to make pure

It’s impossible to have pure functions consistently—there will always be a time when you need to reach out and fetch data, the most common example of which is reaching into the DOM to grab a specific element to interact with. It’s a fact of JavaScript that you’ll have to do this, and you shouldn’t feel bad about reaching outside of your function. Instead, carefully consider if there is a way to structure your code so that impure functions can be isolated. Prevent them from having broad effects throughout your codebase, and try to use pure functions whenever appropriate.

Let’s take a look at the code below, which grabs an element from the DOM and changes its background color to red:


function changeElementToRed() {
    var foo = document.getElementById('foo');
    foo.style.backgroundColor = "red";
}

changeElementToRed();

There are two problems with this piece of code, both solvable by transitioning to a pure function:

  1. This function is not reusable at all; it’s directly tied to a specific DOM element. If we wanted to reuse it to change a different element, we couldn’t.
  2. This function is hard to test because it’s not pure. To test it, we would have to create an element with a specific ID rather than any generic element.

Given the two points above, I would rewrite this function to:


function changeElementToRed(elem) {
    elem.style.backgroundColor = "red";
}

function changeFooToRed() {
    var foo = document.getElementById('foo');
    changeElementToRed(foo);
}

changeFooToRed();

We’ve now changed changeElementToRed() to not be tied to a specific DOM element and to be more generic. At the same time, we’ve made it pure, bringing us all the benefits discussed previously.

It’s important to note, though, that I’ve still got some impure code—changeFooToRed() is impure. You can never avoid this, but it’s about spotting opportunities where turning a function pure would increase its readability, reusability, and testability. By keeping the places where you’re impure to a minimum and creating as many pure, reusable functions as you can, you’ll save yourself a huge amount of pain in the future and write better code.
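
To see the reusability and testability we’ve gained, the same pure function can now be pointed at any element, including one created purely for a test (the element id “bar” and the assertion below are hypothetical):


// Reuse with a different element.
var bar = document.getElementById('bar');
changeElementToRed(bar);

// Test with a throwaway element; no specific ID is required.
var elem = document.createElement('div');
changeElementToRed(elem);
console.assert(elem.style.backgroundColor === 'red', 'element should be red');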

Conclusion

“Pure functions,” “side effects,” and “referential transparency” are terms usually associated with purely functional languages, but that doesn’t mean we can’t take the principles and apply them to our JavaScript, too. By being mindful of these principles and applying them wisely when your code could benefit from them you’ll gain more reliable, self-documenting codebases that are easier to work with and that break less often. I encourage you to keep this in mind next time you’re writing new code, or even revisiting some existing code. It will take some time to get used to these ideas, but soon you’ll find yourself applying them without even thinking about it. Your fellow developers and your future self will thank you.

News stories from Monday 06 June, 2016

Favicon for A List Apart: The Full Feed 06:00 This week's sponsor: FULLSTORY » Post from A List Apart: The Full Feed Visit off-site link

FullStory, a pixel-perfect session playback tool that captures everything about your customer experience with one easy-to-install script.

News stories from Wednesday 01 June, 2016

Favicon for A List Apart: The Full Feed 16:00 Commit to Contribute » Post from A List Apart: The Full Feed Visit off-site link

One morning I found a little time to work on nodemon and saw a new pull request that fixed a small bug. The only problem with the pull request was that it didn’t have tests and didn’t follow the contributing guidelines, which results in the automated deploy not running.

The contributor was obviously extremely new to Git and GitHub and just the small change was well out of their comfort zone, so when I asked for the changes to adhere to the way the project works, it all kind of fell apart.

How do I change this? How do I make it easier and more welcoming for outside developers to contribute? How do I make sure contributors don’t feel like they’re being asked to do more than necessary?

This last point is important.

The real cost of a one-line change

Many times in my own code, I’ve made a single-line change that could be a matter of a few characters, and this alone fixes an issue. Except that’s never enough. (In fact, there’s usually a correlation between the maturity and/or age of the project and the amount of additional work to complete the change due to the growing complexity of systems over time.)

A recent issue in my Snyk work was fixed with this single line change:

lines of code

In this particular example, I had solved the problem in my head very quickly and realized that this was the fix. Except that I had to then write the test to support the change, not only to prove that it works but to prevent regression in the future.

My projects (and Snyk’s) all use semantic release to automate releases by commit message. In this particular case, I had to bump the dependencies in the Snyk command line and then commit that with the right message format to ensure a release would inherit the fix.

All in all, the one-line fix turned into this: one line, one new test, tested across four versions of node, bump dependencies in a secondary project, ensure commit messages were right, and then wait for the secondary project’s tests to all pass before it was automatically published.

Put simply: it’s never just a one-line fix.

Helping those first pull requests

Doing a pull request (PR) into another project can be pretty daunting. I’ve got a fair amount of experience and even I’ve started and aborted pull requests because I found the chain of events leading up to a complete PR too complex.

So how can I change my projects and GitHub repositories to be more welcoming to new contributors and, most important, how can I make that first PR easy and safe?

Issue and pull request templates

GitHub recently announced support for issue and PR templates. These are a great start because now I can specifically ask for items to be checked off, or information to be filled out to help diagnose issues.

Here’s what the PR template looks like for Snyk’s command line interface (CLI):


- [ ] Ready for review
- [ ] Follows CONTRIBUTING rules
- [ ] Reviewed by @remy (Snyk internal team)

 #### What does this PR do?
 #### Where should the reviewer start?
 #### How should this be manually tested?
 #### Any background context you want to provide?
 #### What are the relevant tickets?
 #### Screenshots
 #### Additional questions
 

This is partly based on QuickLeft’s PR template. These items are not hard prerequisites on the actual PR, but it does help in getting full information. I’m slowly adding these to all my repos.

In addition, having a CONTRIBUTING.md file in the root of the repo (or in .github) means new issues and PRs include the notice in the header:

GitHub contributing notice

Automated checks

For context: semantic release will read the commits in a push to master, and if there’s a feat: commit, it’ll do a minor version bump. If there’s a fix: it’ll do a patch version bump. If the text BREAKING CHANGE: appears in the body of a commit, it’ll do a major version bump.
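
For example, commit messages following that convention might look like this (the messages themselves are invented for illustration; the BREAKING CHANGE: line goes in the commit body, not the subject):


fix: handle a missing config file without crashing
    -> next release is a patch (1.2.3 to 1.2.4)

feat: add JSON output to the report command
    -> next release is a minor bump (1.2.3 to 1.3.0)

feat: drop support for node 0.10

BREAKING CHANGE: node 0.12 is now the minimum supported version
    -> next release is a major bump (1.2.3 to 2.0.0)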

I’ve been using semantic release in all of my projects. As long as the commit message format is right, there’s no work involved in creating a release, and no work in deciding what the version is going to be.

Something that none of my repos historically had was the ability to validate contributed commits for formatting. In reality, semantic release doesn’t mind if you don’t follow the commit format; they’re simply ignored and don’t drive releases (to npm).

I’ve since come across ghooks, which will run commands on Git hooks, in particular using a commit-msg hook validate-commit-msg. The installation is relatively straightforward, and the feedback to the user is really good because if the commit needs tweaking to follow the commit format, I can include examples and links.
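
A minimal sketch of what that setup can look like in package.json, assuming the usual ghooks configuration block (the version numbers here are only illustrative):


{
  "devDependencies": {
    "ghooks": "^1.2.0",
    "validate-commit-msg": "^2.6.0"
  },
  "config": {
    "ghooks": {
      "commit-msg": "validate-commit-msg"
    }
  }
}

With that in place, a commit whose message doesn’t match the expected format is rejected before it ever lands in the repository.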

Here’s what it looks like on the command line:

Git commit validation

...and in the GitHub desktop app (for comparison):

Git commit validation

This is work that I can load on myself to make contributing easier, which in turn makes my job easier when it comes to managing and merging contributions into the project. In addition, for my projects, I’m also adding a pre-push hook that runs all the tests before the push to GitHub is allowed. That way if new code has broken the tests, the author is aware.

To see the changes required to get the output above, see this commit in my current tinker project.

There are two further areas worth investigating. The first is the commitizen project. Second, what I’d really like to see is a GitHub bot that could automatically comment on pull requests to say whether the commits are okay (and if not, direct the contributor on how to fix that problem) and also to show how the PR would affect the release (i.e., whether it would trigger a release, either as a bug patch or a minor version change).

Including example tests

I think this might be the crux of the problem: the lack of example tests in any project. A test can be a minefield of challenges, such as these:

  • knowing the test framework
  • knowing the application code
  • knowing about testing methodology (unit tests, integration, something else)
  • replicating the test environment

Another project of mine, inliner, has a disproportionately high rate of PRs that include tests. I put that down to the ease with which users can add tests.

The contributing guide makes it clear that contributing doesn’t even require that you write test code. Authors just create a source HTML file and the expected output, and the test automatically includes the file and checks that the output is as expected.

Adding specific examples of how to write tests will, I believe, lower the barrier of entry. I might link to some sort of sample test in the contributing doc, or create some kind of harness (like inliner does) to make it easy to add input and expected output.
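
A rough sketch of that kind of fixture-driven harness, assuming Node, a fixtures directory of paired files, and a hypothetical transform() export under test:


var fs = require('fs');
var path = require('path');
var assert = require('assert');
var transform = require('../'); // the module under test (hypothetical)

var dir = path.join(__dirname, 'fixtures');

fs.readdirSync(dir)
  .filter(function (file) { return /\.source\.html$/.test(file); })
  .forEach(function (file) {
    var source = fs.readFileSync(path.join(dir, file), 'utf8');
    var expected = fs.readFileSync(
      path.join(dir, file.replace('.source.html', '.expected.html')), 'utf8');
    // Contributors only add fixture files; this loop turns each pair into a test.
    assert.strictEqual(transform(source), expected, file);
  });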

Fixing common mistakes

Something I’ve also come to accept is that developers don’t read contributing docs. It’s okay, we’re all busy, we don’t always have time to pore over documentation. Heck, contributing to open source isn’t easy.

I’m going to start including a short document on how to fix common problems in pull requests. Often it’s amending a commit message or rebasing the commits. This is easy for me to document, and will allow me to point new users to a walkthrough of how to fix their commits.

What’s next?

In truth, most of these items are straightforward and not much work to implement. Sure, I wouldn’t drop everything I’m doing and add them to all my projects at once, but certainly I’d include them in each active project as I work on it.

  1. Add issue and pull request templates.
  2. Add ghooks and validate-commit-msg with standard language (most if not all of my projects are node-based).
  3. Either make adding a test super easy, or at least include sample tests (for unit testing and potentially for integration testing).
  4. Add a contributing document that includes notes about commit format, tests, and anything that can make the contributing process smoother.

Finally, I (and we) always need to keep in mind that when someone has taken time out of their day to contribute code to our projects—whatever the state of the pull request—it’s a big deal.

It takes commitment to contribute. Let’s show some love for that.

News stories from Tuesday 31 May, 2016

Favicon for A List Apart: The Full Feed 23:51 This week's sponsor: JIRA » Post from A List Apart: The Full Feed Visit off-site link

Thanks to our sponsor JIRA. Try JIRA for free today.

Favicon for Joel on Software 07:30 Introducing HyperDev » Post from Joel on Software Visit off-site link

One more thing…

It’s been awhile since we launched a whole new product at Fog Creek Software (the last one was Trello, and that’s doing pretty well). Today we’re announcing the public beta of HyperDev, a developer playground for building full-stack web-apps fast.

HyperDev is going to be the fastest way to bang out code and get it running on the internet. We want to eliminate 100% of the complicated administrative details around getting code up and running on a website. The best way to explain that is with a little tour.

Step one. You go to hyperdev.com.

Boom. Your new website is already running. You have your own private virtual machine (well, really it’s a container but you don’t have to care about that or know what that means) running on the internet at its own, custom URL which you can already give people and they can already go to it and see the simple code we started you out with.

All that happened just because you went to hyperdev.com.

Notice what you DIDN’T do.

  • You didn’t make an account.
  • You didn’t use Git. Or any version control, really.
  • You didn’t deal with name servers.
  • You didn’t sign up with a hosting provider.
  • You didn’t provision a server.
  • You didn’t install an operating system or a LAMP stack or Node or anything.
  • You didn’t configure the server.
  • You didn’t figure out how to integrate and deploy your code.

You just went to hyperdev.com. Try it now!

What do you see in your browser?

Well, you’re seeing a basic IDE. There’s a little button that says SHOW and when you click on that, another browser window opens up showing you your website as it appears to the world. Notice that we invented a unique name for you.

Over there in the IDE, in the bottom left, you see some client side files. One of them is called index.html. You know what to do, right? Click on index.html and make a couple of changes to the text.

Now here’s something that is already a little bit magic… As you type changes into the IDE, without saving, those changes are deploying to your new web server and we’re refreshing the web browser for you, so those changes are appearing almost instantly, both in your browser and for anyone else on the internet visiting your URL.

Again, notice what you DIDN’T do:

  • You didn’t hit a “save” button.
  • You didn’t commit to Git.
  • You didn’t push.
  • You didn’t run a deployment script.
  • You didn’t restart the web server.
  • You didn’t refresh the page on your web browser.

You just typed some changes and BOOM they appeared.

OK, so far so good. That’s a little bit like jsFiddle or Stack Overflow snippets, right? NBD.

But let’s look around the IDE some more. In the top left, you see some server side files. These are actual code that actually runs on the actual (virtual) server that we’re running for you. It’s running node. If you go into the server.js file you see a bunch of JavaScript. Now change something there, and watch your window over on the right.

Magic again… the changes you are making to the server-side JavaScript code are already deployed and they’re already showing up live in the web browser you’re pointing at your URL.

Literally every change you make is instantly saved, uploaded to the server, the server is restarted with the new code, and your browser is refreshed, all within half a second. So now your server-side code changes are instantly deployed, and once again, notice that you didn’t:

  • Save
  • Do Git incantations
  • Deploy
  • Buy and configure a continuous integration solution
  • Restart anything
  • Send any SIGHUPs

You just changed the code and it was already reflected on the live server.

Now you’re starting to get the idea of HyperDev. It’s just a SUPER FAST way to get running code up on the internet without dealing with any administrative headaches that are not related to your code.

Ok, now I think I know the next question you’re going to ask me.

“Wait a minute,” you’re going to ask. “If I’m not using Git, is this a single-developer solution?”

No. There’s an Invite button in the top left. You can use that to get a link that you give your friends. When they go to that link, they’ll be editing, live, with you, in the same documents. It’s a magical kind of team programming where everything shows up instantly, like Trello, or Google Docs. It is a magical thing to collaborate with a team of two or three or four people banging away on different parts of the code at the same time without a source control system. It’s remarkably productive; you can dive in and help each other or you can each work on different parts of the code.

“This doesn’t make sense. How is the code not permanently broken? You can’t just sync all our changes continuously!”

You’d be surprised just how well it does work, for most small teams and most simple programming projects. Listen, this is not the future of all software development. Professional software development teams will continue to use professional, robust tools like Git and that’s great. But it’s surprising how just having continuous merging and reliable Undo solves the “version control” problem for all kinds of simple coding problems. And it really does create an insanely addictive form of collaboration that supercharges your team productivity.

“What if I literally type ‘DELETE * FROM USERS’ on my way to typing ‘WHERE id=9283’, do I lose all my user data?”

Erm… yes. Don’t do that. This doesn’t come up that often, to be honest, and we’re going to add the world’s simplest “branch” feature so that optionally you can have a “dev” and “live” branch, but for now, yeah, you’d be surprised at how well this works in practice even though in theory it sounds terrifying.

“Does it have to be JavaScript?”

Right now the server we gave you is running Node so today it has to be JavaScript. We’ll add other languages soon.

“What can I do with my server?”

Anything you can do in Node. You can add any package you want just by editing package.json. So literally any working JavaScript you want to cut and paste from Stack Overflow is going to work fine.
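
For instance, something like this (Express is just one example of a package you might add to package.json, and the code is a sketch rather than HyperDev’s actual starter):


var express = require('express');
var app = express();

app.get('/', function (req, res) {
  res.send('Hello from my container!');
});

// Hosted containers typically tell you which port to listen on via an
// environment variable; 3000 is a fallback for running the code elsewhere.
app.listen(process.env.PORT || 3000);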

“Is my server always up?”

If you don’t use it for a while, we’ll put your server to sleep, but it will never take more than a few seconds to restart. But yes, for all intents and purposes, you can treat it like a reasonably reliable, 24/7 web server. This is still a beta so don’t ask me how many 9’s. You can have all the 8’s you want.

“Why would I trust my website to you? What if you go out of business?”

There’s nothing special about the container we gave you; it’s a generic VM running Node. There’s nothing special about the way we told you to write code; we do not give you special frameworks or libraries that will lock you in. Download your source code and host it anywhere and you’re back in business.

“How are you going to make money off of this?”

Aaaaaah! why do you care!

But seriously, the current plan is to have a free version for public / open source code you don’t mind sharing with the world. If you want private code, much like private repos, there will eventually be paid plans, and we’ll have corporate and enterprise versions. For now it’s all just a beta so don’t worry too much about that!

“What is the point of this Joel?”

As developers we have fantastic sets of amazing tools for building, creating, managing, testing, and deploying our source code. They’re powerful and can do anything you might need. But they’re usually too complex and too complicated for very simple projects. Useful little bits of code never get written because you dread the administration of setting up a new dev environment, source code repo, and server. New programmers and students are overwhelmed by the complexity of distributed version control when they’re still learning to write a while loop. Apps that might solve real problems never get written because of the friction of getting started.

Our theory here is that HyperDev can remove all the barriers to getting started and building useful things, and more great things will get built.

“What now?”

Really? Just go to HyperDev and start playing!

Need to hire a really great programmer? Want a job that doesn't drive you crazy? Visit the Joel on Software Job Board: Great software jobs, great people.

News stories from Wednesday 25 May, 2016

Favicon for A List Apart: The Full Feed 16:00 Once Upon a Time » Post from A List Apart: The Full Feed Visit off-site link

Once upon a time, I had a coworker named Bob who, when he needed help, would start the conversation in the middle and work to both ends. My phone would ring, and the first thing I heard was: “Hey, so, we need the spreadsheets on Tuesday so that Information Security can have them back to us in time for the estimates.”

Spreadsheets? Estimates? Bob and I had never discussed either. As I had been “discouraged” from responding with “What the hell are you talking about now?” I spent the next 10 minutes of every Bob call trying to tease out the context of his proclamations.

Clearly, Bob needed help—and not just with spreadsheets.

Then there was Susan. When Susan wanted help, she gave me the entire life story of a project in the most polite, professional language possible. An email from Susan might go like this:

Good morning,

I’m working on the Super Bananas project, which we started three weeks ago and have been slowly working on since. We began with persona writing, then did some scenarios, and discussed a survey.

[Insert two more paragraphs of the history of the project]

I’m hoping—if you have the opportunity (due to your previous experience with [insert four of my last projects in chronological order])—you may be able to share a content-inventory template that would be appropriate for this project. If it isn’t too much trouble, when you get a chance, could you forward me the template at your earliest convenience?

Thank you in advance for your cooperation,

Susan

An email that said, “Hey do you have a content-inventory template I could use on the Super Bananas Project?” would have sufficed, but Susan wanted to be professional. She believed that if I had to ask a question, she had failed to communicate properly. And, of course, that failure would weigh heavy on all our heads.

Bob and Susan were as opposite as the tortoise and the hare, but they shared a common problem. Neither could get over the river and through the woods effectively. Specifically, they were both lousy at establishing context and getting to the point.

We all need the help of others to build effective tools and applications. Communication skills are so critical to that endeavor that we’ve seen article after article after article—not to mention books, training classes, and job postings—stressing the importance of communication skills. Without the ability to communicate, we can neither build things right, nor build the right things, for our clients and our users.

Still, context-setting is a tricky skill to learn. Stray too far toward Bob, and no one knows what we’re talking about. Follow Susan’s example, and people get bored and wander off before we get to the point.

Whether we’re asking a colleague for help or nudging an end user to take action, we want them to respond a certain way. And whether we’re writing a radio ad, publishing a blog post, writing an email, or calling a colleague, we have to set the proper level of context to get the result we want.

The most effective technique I’ve found for beginners is a process I call “Once Upon a Time.”

Fairy tales? Seriously?

Fairy tales are one of our oldest forms of folklore, with evidence indicating that they may stretch back to the Roman Empire. The prelude “Once upon a time” dates to 1380 CE, according to the Oxford English Dictionary. Wikipedia lists over 75 language variations of the stock story opener. It’s safe to say that the vast majority of us, regardless of language or culture, have heard our share of fairy tales, from the 1800s-era Brothers Grimm stories to the 1987 musical Into the Woods.

We know how they go:

Once upon a time, there was a [main character] living in [this situation] who [had this problem]. [Some person] knows of this need and sends the [main character] out to [complete these steps]. They [do things] but it’s really hard because [insert challenges]. They overcome [list of challenges], and everyone lives happily ever after.

Fairy tales are effective oral storytelling techniques precisely because they follow a standard structure that always provides enough context to understand the story. Almost everything we do can be described with this structure.

Once upon a time Anne lacked an ice cream sandwich. This forced her to get off the couch and go to the freezer, where food stayed amazingly cold. She was forced to put her hands in the icy freezer to dig the ice cream sandwich box out of the back. She overcame the cold and was rewarded with a tasty ice cream sandwich! And they all lived happily ever after.

The structure of a fairy tale’s beginning has a lot of similarities to the journalistic Five Ws of basic information gathering: Who? What? When? Where? Why? How?

In our communication construct, we are the main character whose situation and problem need to be succinctly described. We’ve been sent out to do a thing, we’ve hit a challenge, and now we need specific help to overcome the challenge.

How does this help me if I’m a Bob or a Susan?

When Bob wanted to tell his story, he didn’t start with “Once upon a time…” He started halfway through the story. If Bob was Little Red Riding Hood, he would have started by saying, “We need scissors and some rocks.” (Side note: the general lack of knowledge about how surgery works in that particular tale gives me chills.)

When Susan wanted to tell her story, she started before “Once upon a time…” If she was Little Red Riding Hood, she started by telling you how her parents met, how long they dated, and so on, before finally getting around to mentioning that she was trapped in a wolf’s stomach.

When we tell our stories, we have to start at the beginning—not too early, not too late. If we’re Bob, that means making sure we’ve relayed the basic facts: who we are, what our goal is, possibly who sent us, and what our challenge is. If we’re Susan, we need to make sure we limit ourselves to the facts we actually need.

This is where we take the fairy-tale format and put it into the first person. Susan might write:

Once upon a time, the Bananas team asked me to do the content strategy for their project. We made good progress until we had this problem: we don’t have a template for content inventories. Bob suggested I contact you. Do you have a template you can send us?

Bob might say:

Once upon a time, you and I were working on the data mapping of the new Information Security application. Then Information Security asked us to send the mapping to them so they could validate it. This is a problem because we only have until Tuesday to give them the unfinished spreadsheets. Otherwise we’ll hit an even bigger problem: we won’t be able to estimate the project size on Friday without the spreadsheet. Can you help me get the spreadsheet to them on time?

Notice the parallels between the fairy tales and these drafts: we know the main character, their situation, who sent them or triggered their move, and what they need to solve their problem. In Bob’s case, this is much more information than he usually provides. In Susan’s, it’s probably much less. In both cases, we’ve distilled the situation and the request down to the basics. In both cases, the only edit needed is to remove “Once upon a time…” from the first sentence, and it’s ready to go.

But what about…?

Both the Bobs and the Susans I’ve worked with have had questions about this technique, especially since in both cases they thought they were already doing a pretty good job of providing context.

The original Susan had two big concerns that led her to give out too much information. The first was that she’d sound unprofessional if she didn’t include every last detail and nuance of business etiquette. The second was that if her recipient had questions, they’d consider her amateurish for not providing every bit of information up front.

Susans of the world, let me assure you: clear, concise communication is professional. The message isn’t not to use “please” and “thank you”; it’s that “If it isn’t too much trouble, when you get a chance, could you please consider…” is probably overkill.

Beyond that, no one can anticipate every question another person might have. Clear communication starts a dialogue by covering the basics and inviting questions. It also saves time; you only have to answer the questions your colleague or reader actually has. If you’re not sure whether to keep a piece of information in your story, take it out and see if the tale still makes sense.

Bob was a tougher nut to crack, in part because he frequently didn’t realize he was starting in the middle. Bob was genuinely baffled that colleagues hadn’t read his mind to know what he was talking about. He thought he just needed the answer to one “quick” question. Once he was made aware that he was confusing—and sometimes annoying—coworkers, he could be brought back on track with gentle suggestions. “Okay Bob, let’s start over. Once upon a time you were…?”

Begin at the beginning and stop at the end

Using the age-old format of “Once upon a time…” gives us an incredibly sturdy framework to use for requesting action from people. We provide all of the context they need to understand our request, as well as a clear and concise description of that request.

Clear, concise, contextual communication is professional, efficient, and much less frustrating to everyone involved, so it pays to build good habits, even if the basis of those habits seems a bit corny.

Do you really need to start with “Once upon a time…” to tell a story or communicate a request? Well, it doesn’t hurt. The phrase is really a marker that you’re changing the way you think about your writing, for whom you’re writing it, and what you expect to gain. Soup doesn’t require stones, and business communication doesn’t require “Once upon a time…”

But it does lead to more satisfying endings.

And they all lived happily ever after.

News stories from Monday 23 May, 2016

Favicon for A List Apart: The Full Feed 16:48 This week's sponsor: ​FullStory » Post from A List Apart: The Full Feed Visit off-site link

With our sponsor FULLSTORY, you get a pixel-perfect session playback tool that helps answer any question about your customer’s online experience.​ ​One easy-to-install script captures everything you need.

Favicon for test.ical.ly 09:12 Hallo Welt! » Post from test.ical.ly Visit off-site link

Welcome to the German version of WordPress. This is the first post. You can edit it or delete it. And then start writing!

News stories from Tuesday 17 May, 2016

Favicon for A List Apart: The Full Feed 16:00 The Rich (Typefaces) Get Richer » Post from A List Apart: The Full Feed Visit off-site link

There are over 1,200 font families available on Typekit. Anyone with a Typekit plan can freely use any of those typefaces, and yet we see the same small selection used absolutely everywhere on the web. Ever wonder why?

The same phenomenon happens with other font services like Google Fonts and MyFonts. Google Fonts offers 708 font families, but we can’t browse the web for 15 minutes without encountering Open Sans and Lato. MyFonts has over 20,000 families available as web fonts, yet designers consistently reach for only a narrow selection of those.

On my side project Typewolf, I curate daily examples of nice type in the wild. Here are the ten most popular fonts from 2015:

  1. Futura
  2. Aperçu
  3. Proxima Nova
  4. Gotham
  5. Brown
  6. Avenir
  7. Caslon
  8. Brandon Grotesque
  9. GT Walsheim
  10. Circular

And here are the ten most popular from 2014:

  1. Brandon Grotesque
  2. Futura
  3. Avenir
  4. Aperçu
  5. Proxima Nova
  6. Franklin Gothic
  7. GT Walsheim
  8. Gotham
  9. Circular
  10. Caslon

Notice any similarities? Nine out of the ten fonts from 2014 made the top ten again in 2015. Admittedly, Typewolf is a curated showcase, so there is bound to be some bias in the site selection process. But with 365 sites featured in a year, I think Typewolf is a solid representation of what is popular in the design community.

Other lists of popular fonts show similar results. Or simply look around the web and take a peek at the CSS—Proxima Nova, Futura, and Brandon Grotesque dominate sites today. And these fonts aren’t just a little more popular than other fonts—they are orders of magnitude more popular.

When it comes to typefaces, the rich get richer

I don’t mean to imply that type designers are getting rich like Fortune 500 CEOs and flying around to type conferences in their private Learjets (although some type designers are certainly doing quite well). I’m just pointing out that a tiny percentage of fonts get the lion’s share of usage and that these “chosen few” continue to become even more popular.

The rich get richer phenomenon (also known as the Matthew Effect) refers to something that grows in popularity due to a positive feedback loop. An app that reaches number one in the App Store will receive press because it is number one, which in turn will give it even more downloads and even more press. Popularity breeds popularity. For a cogent book that discusses this topic much more eloquently than I ever could, check out Nassim Nicholas Taleb’s The Black Swan.

But back to typefaces.

Designers tend to copy other designers. There’s nothing wrong with that—designers should certainly try to build upon the best practices of others. And they shouldn’t be culturally isolated and unaware of current trends. But designers also shouldn’t just mimic everything they see without putting thought into what they are doing. Unfortunately, I think this is what often happens with typeface selection.

How does a typeface first become popular, anyway?

I think it all begins with a forward-thinking designer who takes a chance on a new typeface. She uses it in a design that goes on to garner a lot of attention. Maybe it wins an award and is featured prominently in the design community. Another designer sees it and thinks, “Wow, I’ve never seen that typeface before—I should try using it for something.” From there it just cascades into more and more designers using this “new” typeface. But with each use, less and less thought goes into why they are choosing that particular typeface. In the end, it’s just copying.

Or, a typeface initially becomes popular simply from being in the right place at the right time. When you hear stories about famous YouTubers, there is one thing almost all of them have in common: they got in early. Before the market is saturated, there’s a much greater chance of standing out; your popularity is much more likely to snowball. A few of the most popular typefaces on the web, such as Proxima Nova and Brandon Grotesque, tell a similar story.

The typeface Gotham skyrocketed in popularity after its use in Obama’s 2008 presidential campaign. But although it gained enormous steam in the print world, it wasn’t available as a web font until 2013, when the company then known as Hoefler & Frere-Jones launched its subscription web font service. Proxima Nova, a typeface with a similar look, became available as a web font early, when Typekit launched in 2009. Proxima Nova is far from a Gotham knockoff—an early version, Proxima Sans, was developed before Gotham—but the two typefaces share a related, geometric aesthetic. Many corporate identities used Gotham, so when it came time to bring that identity to the web, Proxima Nova was the closest available option. This pushed Proxima Nova to the top of the bestseller charts, where it remains to this day.

Brandon Grotesque probably gained traction for similar reasons. It has quite a bit in common with Neutraface, a typeface that is ubiquitous in the offline world—walk into any bookstore and you’ll see it everywhere. Brandon Grotesque was available early on as a web font with simple licensing, whereas Neutraface was not. If you wanted an art-deco-inspired geometric sans serif with a small x-height for your website, Brandon Grotesque was the obvious choice. It beat Neutraface to market on the web and is now one of the most sought-after web fonts. Once a typeface reaches a certain level of popularity, it seems likely that a psychological phenomenon known as the availability heuristic kicks in. According to the availability heuristic, people place much more importance on things that they are easily able to recall. So if a certain typeface immediately comes to mind, then people assume it must be the best option.

For example, Proxima Nova is often thought of as incredibly readable for a sans serif due to its large x-height, low stroke contrast, open apertures, and large counters. And indeed, it works very well for setting body copy. However, there are many other sans serifs that fit that description—Avenir, FF Mark, Gibson, Texta, Averta, Museo Sans, Sofia, Lasiver, and Filson, to name a few. There’s nothing magical about Proxima Nova that makes it more readable than similar typefaces; it’s simply the first one that comes to mind for many designers, so they can’t help but assume it must be the best.

On top of that, the mere-exposure effect suggests that people tend to prefer things simply because they are more familiar with them—the more someone encounters Proxima Nova, the more appealing they tend to find it.

So if we are stuck in a positive feedback loop where popular fonts keep becoming even more popular, how do we break the cycle? There are a few things designers can do.

Strive to make your brand identifiable by just your body text

Even if it’s just something subtle, aim to make the type on your site unique in some way. If a reader can tell they are interacting with your brand solely by looking at the body of an article, then you are doing it right. This doesn’t mean that you should completely lose control and use type just for the sole purpose of standing out. Good type, some say, should be invisible. (Some say otherwise.) Show restraint and discernment. There are many small things you can do to make your type distinctive.

Besides going with a lesser-used typeface for your body text, you can try combining two typefaces (or perhaps three, if you’re feeling frisky) in a unique way. Headlines, dates, bylines, intros, subheads, captions, pull quotes, and block quotes all offer ample opportunity for experimentation. Try using heavier and lighter weights, italics and all-caps. Using color is another option. A subtle background color or a contrasting subhead color can go a long way in making your type memorable.

Don’t make your site look like a generic website template. Be a brand.

Dig deeper on Typekit

There are many other high-quality typefaces available on Typekit besides Proxima Nova and Brandon Grotesque. Spend some time browsing through their library and try experimenting with different options in your mockups. The free plan that comes with your Adobe Creative Cloud subscription gives you access to every single font in their library, so you have no excuse not to at least try to discover something that not everyone else is using.

A good tip is to start with a designer or foundry you like and then explore other typefaces in their catalog. For example, if you’re a fan of the popular slab serif Adelle from TypeTogether, simply click the name of their foundry and you’ll discover gems like Maiola and Karmina Sans. Don’t be afraid to try something that you haven’t seen used before.

Dig deeper on Google Fonts (but not too deep)

As of this writing, there are 708 font families available for free on Google Fonts. There are a few dozen or so really great choices. And then there are many, many more not-so-great choices that lack italics and additional weights and that are plagued by poor kerning. So, while you should be wary of digging too deep on Google Fonts, there are definitely some less frequently used options, such as Alegreya and Fira Sans, that can hold their own against any commercial font.

I fully support the open-source nature of Google Fonts and think that making good type accessible to the world for free is a noble mission. As time goes by, though, the good fonts available on Google Fonts will simply become the next Times New Romans and Arials—fonts that have become so overused that they feel like mindless defaults. So if you rely on Google Fonts, there will always be a limit to how unique and distinctive your brand can be.

Try another web font service such as Fonts.com, Cloud.typography, or Webtype

It may have a great selection, but Typekit certainly doesn’t have everything. The Fonts.com library dwarfs the Typekit library, with over 40,000 fonts available. Hoefler & Co.’s high-quality collection of typefaces is only available through their Cloud.typography service. And Webtype offers selections not available on other services.

Self-host fonts from MyFonts, FontShop or Fontspring

Don’t be afraid to self-host web fonts. Serving fonts from your own website really isn’t that difficult and it’s still possible to have a fast-loading website if you self-host. I self-host fonts on Typewolf and my Google PageSpeed Insights scores are 90/100 for mobile and 97/100 for desktop—not bad for an image-heavy site.

MyFonts, FontShop, and Fontspring all offer self-hosting kits that are surprisingly easy to set up. Self-hosting also offers the added benefit of not having to rely on a third-party service that could potentially go down (and take your beautiful typography with it).
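If you haven’t self-hosted before, the heart of a self-hosting kit is just a handful of @font-face rules pointing at font files on your own server. A minimal sketch (the family name and file paths here are made up for illustration):

@font-face {
	font-family: "Example Sans";
	src: url("/fonts/example-sans.woff2") format("woff2"),
	     url("/fonts/example-sans.woff") format("woff");
	font-weight: 400;
	font-style: normal;
}

body {
	font-family: "Example Sans", Helvetica, Arial, sans-serif;
}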

Explore indie foundries

Many small and/or independent foundries don’t make their fonts available through the major distributors, instead choosing to offer licensing directly through their own sites. In most cases, self-hosting is the only available option. But again, self-hosting isn’t difficult and most foundries will provide you with all the sample code you need to get up and running.

Here are some great places to start, in no particular order:

What about Massimo Vignelli?

Before I wrap this up, I think it’s worth briefly discussing famed designer Massimo Vignelli’s infamous handful-of-basic-typefaces advice (PDF). John Boardley of I Love Typography has written an excellent critique of Vignelli’s dogma. The main points are that humans have a constant desire for improvement and refinement; we will always need new typefaces, not just so that brands can differentiate themselves from competitors, but to meet the ever-shifting demands of new technologies. And a limited variety of type would create a very bland world.

No doubt there were those in the 16th century who shared Vignelli’s views. Every age is populated by those who think we’ve reached the apogee of progress… Vignelli’s beloved Helvetica … would never have existed but for our desire to do better, to progress, to create.
John Boardley, “The Vignelli Twelve”

Are web fonts the best choice for every website?

Not necessarily. There are some instances where accessibility and site speed considerations may trump branding—in that case, it may be best just to go with system fonts. Georgia is still a pretty great typeface, and so are newer system UI fonts like San Francisco, Roboto/Noto, and Segoe.

But if you’re working on a project where branding is important, don’t ignore the importance of type. We’re bombarded by more content now than at any other time in history; having a distinctive brand is more critical than ever.

90 percent of design is typography. And the other 90 percent is whitespace.
Jeffrey Zeldman, “The Year in Design”

As designers, ask yourselves: “Is this truly the best typeface for my project? Or am I just using it to be safe, or out of laziness? Will it make my brand memorable, or will my site blend in with every other site out there?” The choice is yours. Dig deep, push your boundaries, and experiment. There are thousands of beautiful and functional typefaces out there—go use them!

News stories from Tuesday 10 May, 2016

Favicon for Ramblings of a web guy 23:58 Don't say ASAP when you really mean DEADIN » Post from Ramblings of a web guy Visit off-site link
I have found that people tend to use the acronym ASAP incorrectly. ASAP stands for As Soon As Possible. The most important part of that phrase to me is As Possible. Sometimes, it's only possible to get something done 3 weeks from now due to other priorities. Or, to do it correctly, it will take hours or days. However, some people don't seem to get this concept. Here are a couple of examples I found on the web.

The Problem with ASAP

What ‘ASAP’ Really Means

ASAP is toxic, avoid it As Soon As Possible

ASAP

It's not the fault of those writers. The world in general seems to be confused on this. Not everyone is confused though. I found ASAP — What It REALLY Means which does seem to get the real meaning.

At DealNews, we struggled with the ambiguity surrounding this acronym. To resolve this, we coined our own phrase and acronym to represent what some people seem to think ASAP means.

DEADIN:
Drop
Everything
And
Do
It
Now

We use this when something needs to be done right now. It can't wait. The person being asked to DEADIN a task needs to literally drop what they are doing and do this instead. This is a much clearer term than ASAP.

With this new acronym in your quiver, you can better determine the importance of a task. Now, when someone asks you to do something ASAP, you can ask "Is next Tuesday OK?" Or you can tell them it will take 10 hours to do it right. If they are okay with those answers, they really did mean ASAP. If they are not, you can ask them if you should "Drop Everything And Do It Now". (Pro tip: It still takes 10 hours to do it right. Don't compromise the quality of your work.)
Favicon for A List Apart: The Full Feed 16:00 Never Show A Design You Haven’t Tested On Users » Post from A List Apart: The Full Feed Visit off-site link

It isn’t hard to find a UX designer to nag you about testing your designs with actual users. The problem is, we’re not very good at explaining why you should do user testing (or how to find the time). We say it like it’s some accepted, self-explanatory truth that deep down, any decent human knows is the right thing to do. Like “be a good person” or “be kind to animals.” Of course, if it were that self-evident, there would be a lot more user testing in this world.

Let me be very specific about why user testing is essential. As long as you’re in the web business, your work will be exposed to users.

If you’re already a user-testing advocate, that may seem obvious, but we often miss something that’s not as clear: how user testing impacts stakeholder communication and how we can ensure testing is built into projects, even when it seems impossible.

The most devilish usability issues are those that haven’t even occurred to you as potential problems; you won’t find all the usability issues just by looking at your design. User testing is a way to be there when it happens, to make sure the stuff you created actually works as you intended, because best practices and common sense will get you only so far. You need to test if you want to innovate; otherwise, it’s difficult to know whether people will get it. Or want it. It’s how you find out whether you’ve created something truly intuitive.

How testing up front saves the day

Last fall, I was going to meet with one of our longtime clients, the charity and NGO Plan International Norway. We had an idea for a very different sign-up form than the one they were using. What they already had worked quite well, so any reasonable client would be a little skeptical. Why fix it if it isn’t broken, right? Preparing for the meeting, we realized our idea could be voted down before we had the chance to try it out.

We decided to quickly put together a usability test before we showed the design.

At the meeting, we began by presenting the results of the user test rather than the design itself.

We discussed what worked well, and what needed further improvement. The conversation that followed was rational and constructive. Together, we and our partners at Plan discussed different ways of improving the first design, rather than nitpicking details that weren’t an issue in the test. It turned out to be one of the best client meetings I’ve ever had.

Panels of photos depicting the transition from hand-drawn sketch to digital mockup

We went from paper sketch to Illustrator sketch to InVision in a day in order to get ready for the test.

User testing gives focus to stakeholder feedback

Naturally, stakeholders in any project feel responsible for the end result and want to discuss suggestions, solutions, and any concerns about your design. By testing the design beforehand, you can focus on the real issues at hand.

Don’t worry about walking into your client meeting with a few unsolved problems. You don’t need to have a solution for every user-identified issue. The goal is to show your design, make clear what you think needs fixing, and ideally, bring a new test of the improved design to the next meeting.

By testing and explaining the problems you’ve found, stakeholders can be included in suggesting solutions, rather than hypothesizing about what might be problems. This also means that they can focus on what they know and are good at. How will this work with our CRM system? Will we be able to combine this approach with our annual campaign?

Since last fall, I’ve been applying this dogma in all the work that I do: never show a design you haven’t tested. We’ve reversed the agenda to present results first, then a detailed walkthrough of the design. So far, our conversations about design and UX have become a lot more productive.

Making room for user testing: sell it like you mean it

Okay, so it’s a good idea to test. But what if the client won’t buy it or the project owner won’t give you the resources? User testing can be a hard sell—I know this from experience. Here are four ways to move past objections.

Don’t make it optional

It’s not unusual to look at the total sum in a proposal, and go, Uhm, this might be a little too much.  So what typically happens? Things that don’t seem essential get trimmed. That usability lab test becomes optional, and we convince ourselves that we’ll somehow persuade the client later that the usability test is actually important.

But how do you convince them that something you made optional a couple of months ago is now really important? The client will likely feel that we’re trying to sell them something they don’t really need.

Describe the objective, not the procedure

A usability lab test with five people often produces valuable—but costly—insight. It also requires resources that don’t go into the test itself: e.g., recruiting and rewarding test subjects, rigging your lab and observation room, making sure the observers from the client are well taken care of (you can’t do that if you’re the one moderating the test), and so on.

Today, rather than putting “usability lab test with five people” in the proposal, I’ll dedicate a few days to: “Quality assurance and testing: We’ll use the methods we deem most suitable at different stages of the process (e.g., usability lab test, guerilla testing, click tests, pluralistic walkthroughs, etc.) to make sure we get it right.”

I have never had a client ask me to scale down the “get it right” part. And even if they do ask you to scale it down, you can still pull it off if you follow the next steps.

Scale down documentation—not the testing

If you think testing takes too much time, it might be because you spend too much time documenting the test. In a lab test, it’s a good idea to have 20 to 30 minutes between each test subject. This gives you time to summarize (and maybe even fix) the things you found in each test before you move on to the next subject. By the end of the day, you have a to-do list. No need to document it any more than that.

List of update notifications in the Slack channel

When user testing the Norwegian Labour Party’s new crowdsourcing site, we all contributed our observations straight into our shared Slack channel.

I’ve also found InVision’s comment mode useful for documenting issues discovered in the tests. If we have an HTML and CSS prototype, screenshots of the relevant pages can be added to InVision, with comments placed on top of the specific issues. This also makes it easy for the client to contribute to the discussion.

Screen capture of InVision mockup, with comments from team members attached to various parts of the design

After the test is done, we’ve already fixed some of the problems. The rest ends up in InVision as a to-do on the relevant page. The prototype is actually in HTML, CSS, and JavaScript, but the visual aspect of InVision’s comment feature makes it much easier to avoid misunderstandings.

Scale down the prototype—not the testing

You don’t need a full-featured website or a polished prototype to begin testing.

  • If you’re testing text, you really just need text.
  • If you’re testing a form, you just need to prototype the form.
  • If you wonder if something looks clickable, a flat Photoshop sketch will do.
  • Even a paper sketch will work to see if you’re on the right track.

And if you test at this early stage, you’ll waste much less time later on.

Low-cost, low-effort techniques to get you started

You can do this. Now, I’m going to show you some very specific ways you can test, and some examples from projects I’ve worked on.

Pluralistic walkthrough

  • Time: 15 minutes and up
  • Costs: Free

A pluralistic walkthrough is UX jargon for asking experts to go through the design and point out potential usability issues. But putting five experts in a room for an hour is expensive (and takes time to schedule). Fortunately, getting them in the same room isn’t always necessary.

At the start of a project, I put sketches or screenshots into InVision and post the link in our Slack channels and other internal social media. I then ask my colleagues to spend a couple of minutes critiquing it. As easy as that, you’ll be able to weed out (or create hypotheses about) the biggest issues in your design.

Team member comments posted on InVision mockup

Before the usability test, we asked colleagues to comment (using InVision) on what they thought would work or not.

Hit the streets

  • Time: 1–3 hours
  • Costs: Snacks

This is a technique that works well if there’s something specific you want to test. If you’re shy, take a deep breath and get over it. This is by far the most effective way of usability testing if you’re short on resources. In the Labour Party project, we were able to test with seven people and summarize our findings within two hours. Here’s how:

  1. Get a device that’s easy to bring along. In my experience, an iPad is most approachable.
  2. Bring candy and snacks. It works great to have a basket of snacks and to put the iPad on the basket too.
  3. Go to a public place with lots of people, preferably a place where people might be waiting (e.g., a station of some sort).
  4. Approach people who look like they are bored and waiting; have your snacks (and iPad) in front of you, and say: “Excuse me, I’m from [company]. Could I borrow a couple of minutes from you? I promise it won’t take more than five minutes. And I have candy!” (This works in Norway, and I’m pretty sure food is a universal language). If you’re working in teams of two, one of you should stay in the background during the approach.
  5. If you’re alone, take notes in between each test. If there are two of you, one person can focus on taking notes while the other is moderating, but it’s still a good idea to summarize between each test.
Two people standing in a public transportation hub, holding a large basket and an iPad

Morten and Ida are about to go to the Central Station in Oslo, Norway, to test the Norwegian Labour Party’s new site for crowdsourcing ideas. Don’t forget snacks!

Online testing tools

  • Time: 30 minutes and up
  • Costs: Most tools have limited free versions. Optimal Workshop charges $149 for one survey and has a yearly plan for $1990.

There isn’t any digital testing tool that can provide the kind of insight you get from meeting real users face-to-face. Nevertheless, digital tools are a great way of going deeper into specific themes to see if you can corroborate and triangulate the data from your usability test.

There are many tools out there, but my two favorites are Treejack and Chalkmark from Optimal Workshop. With Treejack, it rarely takes more than an hour to figure out whether your menus and information architecture are completely off or not. With click tests like Chalkmark, you can quickly get a feel for whether people understand what’s clickable or not.

Screencapture of Illustrator mockup

A Chalkmark test of an early Illustrator mockup of Plan’s new home page. The survey asks: “Where would you click to send a letter to your sponsored child?” The heatmap shows where users clicked.

Diagram combining pie charts and paths

Nothing kills arguments over menus like this baby. With Treejack, you recreate the information architecture within the survey and give users a task to solve. Here we’ve asked: “You wonder how Plan spends its funds. Where would you search for that?” The results are presented as a tree of the paths the users took.

Using existing audience for experiments

  • Time: 30 minutes and up
  • Costs: Free (e.g., using Hotjar and Google Analytics).

One of the things we designed for Plan was longform article pages, binding together a compelling story of text, images, and video. It struck us that these wouldn’t really fit in a usability test. What would the task be? Read the article? And what were the relevant criteria? Time spent? How far he or she scrolled? But what if the person recruited to the test wasn’t interested in the subject? How would we know if it was the design or the story that was the problem, if the person didn’t act as we hoped?

Since we had used actual content and photos (no lorem ipsum!), we figured that users wouldn’t notice the difference between a prototype and the actual website. What if we could somehow see whether people actually read the article when they stumbled upon it in its natural context?

The solution was for Plan to share the link to the prototyped article as if it were a regular link to their website, not mentioning that it was a prototype.

The prototype was set up with Hotjar and Google Analytics. In addition, we had the stats from Facebook Insights. This allowed us to see whether people clicked the link, how much time they spent on the page, how far they scrolled, what they clicked, and even what they did on Plan’s main site if they came from the prototyped article. From this we could surmise that there was no indication of visual barriers (e.g., a big photo making the user think the page was finished), and that the real challenge was actually getting people to click the link in the first place.

Side-by-side images showing the design and the heatmap resulting from user testing

On the left is the Facebook update from Plan. On the right is the heat map from Hotjar, showing how far people scrolled, with no clear drop-out point.

Did you get it done? Was this useful?

  • Time: A few days or a week to set up, but basically no time spent after that
  • Costs: No cost if you build your own; Task Analytics from $950 a month

Sometimes you need harder, bigger numbers to be convincing. This often leads people to A/B testing or Google Analytics, but unless what you’re looking for is increasing a very specific conversion, even these tools can come up short. Often you’d gain more insight looking for something of a middle ground between the pure quantitative data provided by tools like Google Analytics, and the qualitative data of usability tests.

“Was it helpful?” modules are one of those middle-ground options I try to implement in almost all of my projects. Using tools like Google Tag Manager, you can even combine the data, letting you see the pages that have the most “yes” and “no” votes on different parts of your website (content governance dream come true, right?). But the qualitative feedback is also incredibly valuable for suggesting specific things your design is lacking.
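As a rough sketch of how such a module might feed Google Tag Manager (the markup hook, event name, and field names here are assumptions for illustration, not from any of the projects mentioned), each vote can be pushed into the dataLayer along with the page it came from:

// Assumes a container like <p class="feedback">Was this article helpful?
// <button value="yes">Yes</button> <button value="no">No</button></p>
document.querySelectorAll('.feedback button').forEach(function (button) {
	button.addEventListener('click', function () {
		window.dataLayer = window.dataLayer || [];
		window.dataLayer.push({
			event: 'feedback-vote',                 // custom event name (assumption)
			feedbackAnswer: button.value,           // "yes" or "no"
			feedbackPage: window.location.pathname  // which page the vote came from
		});
	});
});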

Feedback submission buttons

“Was this article helpful?” or “Did you find what you were looking for?” are simple questions that can give valuable insight.

This technique falls short if your users weren’t able to find a relevant article. Those folks aren’t going to leave feedback—they’re going to leave. Google Analytics isn’t of much help there, either. That high bounce rate? In most cases you can only guess why. Did they come and go because they found their answer straight away, or because the page was a total miss? Did they spend a lot of time on the page because it was interesting, or because it was impossible to understand?

My clever colleagues made a tool to answer those kinds of questions. When we do a redesign, we run a Task Analytics survey both before and after launch to figure out not only what the top tasks are, but whether or not people were able to complete their task.

When the user arrives, they’re asked if they want to help out. Then they’re asked to do whatever they came for and let us know when they’re done. When they’re done, we ask a) “What task did you come to do?” and b) “Did you complete the task?”

This gives us data that is actionable and easily understood by stakeholders. At our own website, the most common task people arrive for is to contact an employee, and we learned that one in five will fail. We can fix that. And afterward, we can measure whether or not our fix really worked.

Desktop and mobile screenshots from Task Analytics dashboard

Why do people come to Netlife Research’s website, and do they complete their task? Screenshot from Task Analytics dashboard.

Set up a usability lab and have a weekly drop-in test day

  • Time: 6 hours per project tested + time spent observing the test
  • Costs: Rewarding subjects + the minimal costs of setting up a lab

Setting up a usability lab is basically free in 2016:

  • A modern laptop has a microphone and camera built in. No need to buy that.
  • Want to test on mobile? Get a webcam and a flexible tripod, or just turn your laptop around.
  • Numerous screensharing and video conference tools like Skype, Google Hangouts, and GoToMeeting mean there’s no need for hefty audiovisual equipment or mirror windows.
  • Even eyetracking is becoming affordable.

Other than that, you just need a room that’s big enough for you and a user. So even as a UX team of one, you can afford your own usability lab. Setting up a weekly drop-in test makes sense for bigger teams. If you’re at twenty people or more, I’d bet it would be a positive return on investment.

My ingenious colleague Are Halland is responsible for the test each week. He does the recruiting, the lab setup, and the moderating. Each test day consists of tests with four different people, and each person typically gets tasks from two to three different projects that Netlife is currently working on. (Read up on why it makes sense to test with so few people.)

By testing two to three projects at a time and having the same person organize it, we can cut down on the time spent preparing and executing the test without cutting out the actual testing.

As a consultant, all I have to do is to let Are know a few days in advance that I need to test something. Usually, I will send a link to the live stream of the test to clients to let them know we’re testing and that they’re welcome to pop in and take a look. A bonus is that clients find it surprisingly rewarding to see other clients’ tests and to get other clients’ views on their own design (we don’t put competitors in the same test).

This has made it a lot easier to test work on short notice, and it has also reduced the time we have to spend on planning and executing tests.

Two men sitting at a table and working on laptops, with a large screen in the background to display what they are collaborating on

From a drop-in usability test with the Norwegian Labour Party. Eyetracking data on the screen, Morten (Labour Party) and Jørgen (front-end designer) taking notes (and instantly fixing stuff!) on the right.

Testing is designing

As I hope I’ve demonstrated, user testing doesn’t have to be expensive or time-consuming. So what stops us? Personally, I’ve met two big hurdles: building testing into projects to begin with and making a habit out of doing the work.

The critical first step is to make sure that some sort of user testing is part of the approved project plan. A project manager will look at the proposal and make sure we tick that off the list. Eventually, maybe your clients will come asking for it: “But wasn’t there supposed to be some testing in this project?”

Second, you don’t have to ask for anyone’s permission to test. User testing improves not only the quality of our work, but also the communication within teams and with stakeholders. If you’re tasked with designing something, even if you have just a few days to do it, treat testing as a part of that design task. I’ve suggested a couple of ways to do that, even with limited time and funds, and I hope you’ll share even more tips, tricks, and tools in the comments.

News stories from Monday 09 May, 2016

Favicon for Zach Holman 02:00 The New 10-Year Vesting Schedule » Post from Zach Holman Visit off-site link

While employees have been busy building things, founders and VCs have flipped the industry on its head and aggressively sought to prevent employees from making money from their stock options.

Traditionally, early employees would receive an option grant with a four-year vesting schedule and a one-year cliff. In other words, your stock would slowly “vest” — become available for you to purchase — over the course of four years, with the first options vesting one year after your hire date, and (usually) monthly after that.

The promise of this is to keep employees at the company for a number of years, since they don’t receive the full weight of their stock until they’ve been there four years.

Companies still hire with a four year vesting schedule, but the whole damn thing is a lie — in practice, people are going to be stuck at a company for much longer than four years if they want to retain the stock they’ve earned.

This stems from two new developments in recent years: companies are staying private longer (the average age of a recently IPOed tech company is now 11 years), and companies are clamping down on private sales of employee stock after Facebook’s IPO. The impact is best summed up by the recent Handcuffed to Uber article: employees effectively can’t leave Uber without either forfeiting a fortune in unexercised stock or paying a massive tax bill on imaginary, illiquid stock.

An industry run by people who haven’t been an employee in years

The leaders in the industry don’t really face any of the problems that employees face. They don’t even sugarcoat it: it’s pretty remarkable how plainspoken CEOs and VCs are when it comes to going public:

“I’m going to make sure it happens as late as possible,” said Kalanick to CNBC Monday. He added that he had no idea if Uber would go public in the next three to five years.

Don’t Expect an Uber IPO Any Time Soon

and:

“I’m committed to Palantir for the long term, and I’ve advised the company to remain private for as long as it can,” said Mr. Thiel, a billionaire.

Palantir and Investors Spar Over How to Cash In

This is a much harder pill to swallow for those at Palantir, which tends to pay its engineers far below market rate. All this comes from CEO Alex Karp, who attempted to make the case that companies should simultaneously pay their employees less, give them more equity, and not allow them to cash that equity out.

Top venture capitalists agree as well:

This is a top VC and luminary advocating for the position that people who end up wanting to make some money on the stock that they’ve worked hard to vest are disloyal. Nothing I’ve read in the last few weeks has made me more furious. We’re now in a position where the four year vesting schedule isn’t enough for these people. They want the four year vesting schedule, and then they want to control your life for the subsequent 4-8 years while they fuck around in the private market.

If you just had a kid and need some additional liquidity, you’re disloyal. If you’d like to pay off your student debt, forget it, we’re not going to incentivize you to do that. If your partner is going back to school and you have to move across the country, tough luck, please turn in your stock options on the way out. If you’ve been busting your ass on a below market-rate salary for years and now you want a bit of what you’ve worked hard to vest, fuck you, go back to work.

Mechanisms of control

There’s obvious things that can be done to help fix this: one of which is getting rid of the 90-day exercise window, which many companies have started to do.

Another is internal stock buybacks, but these are usually low-key and restrictive. Usually you’ll get capped, either on a personal level (you can’t sell back more than x% of your shares) or on a company-wide level (the maximum that this group of employees can sell is xxx,xxx shares).

Or, sometimes these buybacks are limited by tenure: either it’s only for current employees, or you need to be at a company for x years to be able to participate. That’s somewhat reasonable on the surface, but on the other hand it’s en vogue now for unicorns to staff up and add two thousand people in the last three years you’ve worked there. You might end up managing dozens or hundreds of people in the meantime and have a massive impact on the organization, but still be unable to sell some stock to avoid having all your eggs in one basket, since only people who have been there four years or more can sell.

Another really dicey thing I’ve heard of happening is the following timeline:

  • Company hires a bunch of people
  • Two years pass
  • Company realizes the stock compensation they’re paying these employees is an order of magnitude lower than market average
  • Company gives new grants to employees to, in effect, “make up” for the difference
  • Company grants at a new four year vesting schedule

And that, ladies and gentlemen, is how you sneak a ton of your employees into a de facto six year vesting schedule. A few companies I’ve heard this happening at will give that refresh grant at maybe 10x their initial grant (given how far below market rate their initial grant was), so the employee is effectively stuck for the whole six year ride if they want to retain what they earn. They’ll virtually all go ahead and stick it out, particularly if they weren’t told that this is a catch-up grant — hey, I must be doing really great here, look at how big this second grant is!

Founders of VC-backed companies are insulated from these problems. Once you’ve reached a certain level of success — say, a $100M valuation or unicorn status or some such milestone — it’s expected that your investors will strongly encourage you to take some money off the table between financing rounds so you don’t have to deal with the stress of running a high-growth business while trying to make ends meet.

No one’s yet explained to me, though, why that reasoning works for founders but not for the first employee.

I get wanting to retain people, but strictly using financial levers to do that feels skeezy, and besides, monetary rewards might not be what ultimately motivates people, past a certain point. If you really want to retain your good people, stop building fucking horrible company cultures. You already got your four year vest out of these tenured employees; you can’t move the levers retroactively just because you’re grumpy it’s five years later and you’re not worth a trillion dollars yet.

Public Enemy

There are some people who have been pushing for solutions to these problems.

Mark Cuban’s been pushing the SEC to make a number of changes to make going public easier, and that “it’s worth the hassle to go public”. Mark Zuckerberg’s been pushing that angle as well. And, of course, Fred Wilson had his truly lovely message to Travis Kalanick:

You can’t just say fuck you. Take the goddamn company public.

There are a lot of possible ways to address these problems: taking companies public earlier, being progressive when it comes to exercise windows, doing internal buybacks more often and more permissively, adjusting the tax laws to treat illiquid options differently, and so on. I just don’t know if anyone’s really going to fix it while the people in charge aren’t experiencing the pain.

News stories from Tuesday 03 May, 2016

Favicon for A List Apart: The Full Feed 16:00 Meaningful CSS: Style Like You Mean It » Post from A List Apart: The Full Feed Visit off-site link

These days, we have a world of meaningful markup at our fingertips. HTML5 introduced a lavish new set of semantically meaningful elements and attributes, ARIA defined an entire additional platform to describe a rich internet, and microformats stepped in to provide still more standardized, nuanced concepts. It’s a golden age for rich, meaningful markup.

Yet our markup too often remains a tangle of divs, and our CSS is a morass of classes that bear little relationship to those divs. We nest div inside div inside div, and we give every div a stack of classes—but when we look in the CSS, our classes provide little insight into what we’re actually trying to define. Even when we do have semantic and meaningful markup, we end up redefining it with CSS classes that are inherently arbitrary. They have no intrinsic meaning.

We were warned about these patterns years ago:

In a site afflicted by classitis, every blessed tag breaks out in its own swollen, blotchy class. Classitis is the measles of markup, obscuring meaning as it adds needless weight to every page.
Jeffrey Zeldman, Designing with Web Standards, 1st ed.

Along the same lines, the W3C weighed in with:

CSS gives so much power to the “class” attribute, that authors could conceivably design their own “document language” based on elements with almost no associated presentation (such as DIV and SPAN in HTML) and assigning style information through the “class” attribute… Authors should avoid this practice since the structural elements of a document language often have recognized and accepted meanings and author-defined classes may not. (emphasis mine)

So why, exactly, does our CSS abuse classes so mercilessly, and why do we litter our markup with author-defined classes? Why can’t our CSS be as semantic and meaningful as our markup? Why can’t both be more semantic and meaningful, moving forward in tandem?

Building better objects

A long time ago, as we emerged from the early days of CSS and began building increasingly larger sites and systems, we struggled to develop some sound conventions to wrangle our ever-growing CSS files. Out of that mess came object-oriented CSS.

Our systems for safely building complex, reusable components created a metastasizing classitis problem—to the point where our markup today is too often written in the service of our CSS, instead of the other way around. If we try to write semantic, accessible markup, we’re still forced to tack on author-defined meanings to satisfy our CSS. Both our markup and our CSS reflect a time when we could only define objects with what we had: divs and classes. When in doubt, add more of both. It was safer, especially for older browsers, so we oriented around the most generic objects we could find.

Today, we can move beyond that. We can define better objects. We can create semantic, descriptive, and meaningful CSS that understands what it is describing and is as rich and accessible as the best modern markup. We can define the elephant instead of saying things like .pillar and .waterspout.

Clearing a few things up

But before we turn to defining better objects, let’s back up a bit and talk about what’s wrong with our objects today, with a little help from cartoonist Gary Larson.

Larson once drew a Far Side cartoon in which a man carries around paint and marks everything he sees. “Door” drips across his front door, “Tree” marks his tree, and his cat is clearly labelled “Cat”. Satisfied, the man says, “That should clear a few things up.”

We are all Larson’s label-happy man. We write <table class="table"> and <form class="form"> without a moment’s hesitation. Looking at GitHub, one can find plenty of examples of <main class="main">. But why? You can’t have more than one main element, so you already know how to reference it directly. The new elements in HTML5 are nearly a decade old now. We have no excuse for not using them well. We have no excuse for not expecting our fellow developers to know and understand them.

Why reinvent the semantic meanings already defined in the spec in our own classes? Why duplicate them, or muddy them?

An end-user may not notice or care if you stick a form class on your form element, but you should. You should care about bloating your markup and slowing down the user experience. You should care about readability. And if you’re getting paid to do this stuff, you should care about being the sort of professional who doesn’t write redundant slop. “Why should I care” was the death rattle of those advocating for table-based layouts, too.

Start semantic

The first step to semantic, meaningful CSS is to start with semantic, meaningful markup. Classes are arbitrary, but HTML is not. In HTML, every element has a very specific, agreed-upon meaning, and so do its attributes. Good markup is inherently expressive, descriptive, semantic, and meaningful.

If and when the semantics of HTML5 fall short, we have ARIA, specifically designed to fill in the gaps. ARIA is too often dismissed as “just accessibility,” but really—true to its name—it’s about Accessible Rich Internet Applications. Which means it’s chock-full of expanded semantics.

For example, if you want to define a top-of-page header, you could create your own .page-header class, which would carry no real meaning. You could use a header element, but since you can have more than one header element, that’s probably not going to work. But ARIA’s [role=banner] is already there in the spec, definitively saying, “This is a top-of-page header.”

Once you have <header role="banner">, adding an extra class is simply redundant and messy. In our CSS, we know exactly what we’re talking about, with no possible ambiguity.

And it’s not just about those big top-level landmark elements, either. ARIA provides a way to semantically note small, atomic-level elements like alerts, too.
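As a small illustration of where this leads (the specific declarations here are placeholders, not from the article), the CSS can address those roles directly and stay exactly as descriptive as the markup:

/* Top-of-page header, identified by its landmark role */
[role=banner] {
	padding: 1em;
	background: #f5f5f5;
}

/* Small, atomic-level alert messages */
[role=alert] {
	border: 1px solid tomato;
	padding: 0.5em 1em;
}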

A word of caution: don’t throw ARIA roles on elements that already have the same semantics. So for example, don’t write <button role="button">, because the semantics are already present in the element itself. Instead, use [role=button] on elements that should look and behave like buttons, and style accordingly:

button,
[role=button] {
    … 
}

Anything marked as semantically matching a button will also get the same styles. By leveraging semantic markup, our CSS clearly incorporates elements based on their intended usage, not arbitrary groupings. By leveraging semantic markup, our components remain reusable. Good markup does not change from project to project.

Okay, but why?

Because:

  • If you’re writing semantic, accessible markup already, then you dramatically reduce bloat and get cleaner, leaner, and more lightweight markup. It becomes easier for humans to read and will—in most cases—be faster to load and parse. You remove your author-defined detritus and leave the browser with known elements. Every element is there for a reason and provides meaning.
  • On the other hand, if you’re currently wrangling div-and-class soup, then you score a major improvement in accessibility, because you’re now leveraging roles and markup that help assistive technologies. In addition, you standardize markup patterns, making repeating them easier and more consistent.
  • You’re strongly encouraging a consistent visual language of reusable elements. A consistent visual language is key to a satisfactory user experience, and you’ll make your designers happy as you avoid uncanny-valley situations in which elements look mostly but not completely alike, or work slightly differently. Instead, if it looks like a duck and quacks like a duck, you’re ensuring it is, in fact, a duck, rather than a rabbit.duck.
  • There’s no context-switching between CSS and HTML, because each is clearly describing what it’s doing according to a standards-based language.
  • You’ll have more consistent markup patterns, because the right way is clear and simple, and the wrong way is harder.
  • You don’t have to think of names nearly as much. Let the specs be your guide.
  • It allows you to decouple from the CSS framework du jour.

Here’s another, more interesting scenario. Typical form markup might look something like this (or worse):

<form class="form" method="POST" action=".">
	<div class="form-group">
		<label for="id-name-field">What’s Your Name</label>
		<input type="text" class="form-control text-input" name="name-field" id="id-name-field" />
	</div>
	<div class="form-group">
		<input type="submit" class="btn btn-primary" value="Enter" />
	</div>      
</form>

And then in the CSS, you’d see styles attached to all those classes. So we have a stack of classes describing that this is a form and that it has a couple of inputs in it. Then we add two classes to say that the button that submits this form is a button, and represents the primary action one can take with this form.

Common vs. optimal form markup

  • .form → form: Most of your forms will—or at least should—follow consistent design patterns. Save additional identifiers for those that don’t. Have faith in your design patterns.
  • .form-group → form > p or fieldset > p: The W3C recommends paragraph tags for wrapping form elements. This is a predictable, recommended pattern for wrapping form elements.
  • .form-control or .text-input → [type=text]: You already know it’s a text input.
  • .btn and .btn-primary → [type=submit]: Submitting the form is inherently the primary action.

Some common vs. more optimal form markup patterns

In light of all that, here’s the new, improved markup.

<form method="POST" action=".">
	<p>
		<label for="id-name-field">What’s Your Name</label>
		<input type="text" name="name-field" id="id-name-field" />
	</p>
	<p>
		<button type="submit">Enter</button>
	</p>
</form>

The functionality is exactly the same.
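And the CSS can now describe the form in the same terms as the spec. A rough sketch of what that might look like (the declarations are placeholders; the point is the selectors):

form > p {
	margin-bottom: 1em;
}

[type=text] {
	width: 100%;
	padding: 0.5em;
}

[type=submit] {
	background: tomato;
	color: #fff;
}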

Or consider this CSS. You should be able to see exactly what it’s describing and exactly what it’s doing:

[role=tab] {
	display: inline-block;
}
[role=tab][aria-selected=true] {
	background: tomato;
}

[role=tabpanel] {
	display: none;
}
[role=tabpanel][aria-expanded=true] {
	display: block;
}

Note that [aria-hidden] is more semantic than a utility .hide class, and could also be used here, but aria-expanded seems more appropriate. Neither necessarily needs to be tied to tabpanels, either.

In some cases, you’ll find no element or attribute in the spec that suits your needs. This is the exact problem that microformats and microdata were designed to solve, so you can often press them into service. Again, you’re retaining a standardized, semantic markup and having your CSS reflect that.

At first glance, it might seem like this would fail in the exact scenario that CSS naming structures were built to suit best: large projects, large teams. This is not necessarily the case. CSS class-naming patterns place rigid demands on the markup that must be followed. In other words, the CSS dictates the final HTML. The significant difference is that with a meaningful CSS technique, the styles reflect the markup rather than the other way around. One is not inherently more or less scalable. Both come with expectations.

One possible argument might be that ensuring all team members understand the correct markup patterns will be too hard. On the other hand, if there is any baseline level of knowledge we should expect of all web developers, surely that should be a solid working knowledge of HTML itself, not memorizing arcane class-naming rules. If nothing else, the patterns a team follows will be clear, established, well documented by the spec itself, and repeatable. Good markup and good CSS, reinforcing each other.

To suggest we shouldn’t write good markup and good CSS because some team members can’t understand basic HTML structures and semantics is a cop-out. Our industry can—and should—expect better. Otherwise, we’d still be building sites in tables because CSS layout is supposedly hard for inexperienced developers to understand. It’s an embarrassing argument.

Probably the hardest part of meaningful CSS is understanding when classes remain helpful and desirable. The goal is to use classes as they were intended to be used: as arbitrary groupings of elements. You’d want to create custom classes most often for a few cases:

  • When there are not existing elements, attributes, or standardized data structures you can use. In some cases, you might truly have an object that the HTML spec, ARIA, and microformats all never accounted for. It shouldn’t happen often, but it is possible. Just be sure you’re not sticking a horn on a horse when you’re defining .unicorn.
  • When you wish to arbitrarily group differing markup into one visual style. In this example, you want objects that are not the same to look like they are. In most cases, they should probably be the same, semantically, but you may have valid reasons for wanting to differentiate them.
  • You’re building it as a utility mixin.

Another concern might be building up giant stacks of selectors. In some cases, building a wrapper class might be helpful, but generally speaking, you shouldn’t have a big stack of selectors because the elements themselves are semantically different elements and should not be sharing all that many styles. The point of meaningful CSS is that you know from your CSS that that button or [role=button] applies to all buttons, but [type=submit] is always the primary action item on the form.

We have so many more powerful attributes at our disposal today that we shouldn’t need big stacks of selectors. To have them would indicate sloppy thinking about what things truly are and how they are intended to be used within the overall system.

It’s time to up our CSS game. We can remain dogmatically attached to patterns developed in a time and place we have left behind, or we can move forward with CSS and markup that correspond to defined specs and standards. We can use real objects now, instead of creating abstract representations of them. The browser support is there. The standards and references are in place. We can start today. Only habit is stopping us.

News stories from Thursday 28 April, 2016

Favicon for Zach Holman 02:00 Evaluating Delusional Startups » Post from Zach Holman Visit off-site link

We’re proven entrepreneurs — one cofounder interned at Apple in 2015, and the other helped organize the annual Stanford wake-and-bake charity smoke-off — who are going to take a huge bite out of the $45 trillion Korean baked vegan food goods delivery market for people who live within one block of Valencia Street (but not towards Mission Street because it’s gross and off-brand), and we’re looking for seasoned rockstars to launch this rocket ship into outer space, come join us, we’re likely backed by one of the venture capitalists you possibly read about in a recent court deposition!

Okay, so they’re not always going to come at you like this. If you’re in the market for a new gig at a hot startup, it’s worthwhile to spend some time thinking about whether your sneaking suspicions are correct and the company you’re interviewing with might be full of pretty delusional people.

Here are a couple of traits of delusional startups I’ve been noticing.

I’m gonna make you rich, Bud Fox

After a long afternoon of interviews, I sat down with some head-of-something-rather. Almost verbatim, as well as I can remember it, he dropped this lovely gem in the first four minutes of the conversation:

Now, certainly you’d be joining a rocket ship. And clearly the stock you’d have would make you rich. So what I want to aaaaahhHHHHHHHHHH! thhhwaapkt

The second part of whatever he was saying got swallowed up by the huge Irony Vortex From Six Months In The Future that zipped into existence right next to him, as the Rocket Ship He Was On would promptly implode half a year later.

In my experience, people who promise riches for you, a new hire, fall into two camps:

  • They’re destined to lose it all, or
  • They’re about to become mega rich, and assume the breadcrumbs that fell from the corners of their mouths will also make you mega rich, obviously

Both of those camps are fairly delusional.

Many leaders — unfortunately not all, but that’s life — who have a good chance of striking it rich tend to be pretty realistic, cautious, and optimistically humble about it. In turn, having those personality traits might also lead them to make more generous decisions down the line that would benefit you as well, so that’s also a bonus.

Lately I’ve heard something specific come up from a number of my close friends: the bonus they just received in the first six months from their new job at a large corporate gig far dwarfed the stock proceeds they made from the hot startup they had worked at for years.

People have been saying this for decades, but it’s always worth reiterating: don’t join a startup for the pay, and if someone’s trying to dangle that in front of your eyes, you can tell them to shove their rocket ship up their you-know-where.

The blame game

A company I was interviewing at borked a final interview slot with a head-of-something-such, so I rescheduled them for coffee the following week.

Sipped my tea for half an hour… no show. Hey, it sucks, but miscommunication happens so it wasn’t much to fret over.

The rescheduled phone call another week later started off with an apology that quickly turned into a shitstorm. The main production service was down he said, and therefore he could not attend our coffee, nor could he look up and send me an email about it, even though he did notice it and did briefly feel bad about it. The fucking CEO shat on my team the next day in front of the whole company which was complete bullshit because his team Had Done All The Necessary Things and really it was The CEO’s Dumb Fault The Shit Was All Broken Anyway right? Christ. In any case the position we were interviewing you for has been filled do you want to try for anything else?

So there were a lot of things to unwind here, and I truly do have stories from interviewing at this company that will last me until the end of the sixth Clinton administration, but the real toxic aspect is the:

  • Dude complaining about leadership
  • Leadership blaming specific people and teams across the whole company

Cultures that throw each other under the bus — in either direction, up or down — don’t function as well. The wheels will fall off the wagon at some point, and you’re going to end up with a shit product. You can even be one of those bonkers 120-hour work week startups, grinding hard at all hours of the day, and still be good people to each other. You’ve got to bounce back from setbacks and mistakes. Blameless cultures are better cultures.

On a related note, it’s amazing what you can sometimes get people to admit in an interview. While chatting with another startup, I informally asked what the two employees thought of one of the cofounders. Total shit was the flat response. Doesn’t do jack, and really doesn’t belong in engineering anymore. Props for their openness, I guess, and maybe it helped me dodge a bullet, but how employees talk about others behind their backs says a lot about how cohesive and supportive the company is.

We’re backed by the best VCs, we’re very highly educated, we know product, we have the best product

I don’t understand how you can love your startup’s product.

For me, the high is all about what’s happening next. Can’t wait to ship that new design. The refactoring getting worked on will be an order of magnitude more performant. The wireframes for where we’re hoping to be two years from now are dripping with dopamine.

I don’t understand people who are happy with what they’ve got today. Once you’re happy, you’re in maintenance mode, and maybe that’s fine if you’ve finished your product and are ready to coast on your fat stacks, but by that point you’re beyond building something new anyway. These startups who eagerly float by on shit they did years ago, assuming that rep will carry through any new competition… I just don’t understand that.

Stewart Butterfield has a healthy viewpoint when he talks about Slack:

Are you bringing other changes to Slack?
Oh, God, yeah. I try to instill this into the rest of the team but certainly I feel that what we have right now is just a giant piece of shit. Like, it’s just terrible and we should be humiliated that we offer this to the public.

Certainly he’s being a bit facetious here, since I don’t imagine he thinks the work his employees have done is shit — rather, a product is a process and it takes a long time to chip away the raw marble into the statue inside of it.

The other weird aspect of this that I’ve noticed is that there are some companies who truly hate their competition. I really dig competition, and I think it brings out good stuff across the board, but when it flips into Hatred Of The Enemy it just gets weird. Like c’mon, each of your apps put mustaches on pictures of fish, y’all gotta chill the fuck out, lol.

Asking people what they think about their competition can be a pretty decent measurement of whether the company twiddles the Thumbs of Delusion. If they flatly espouse hatred, that’s weird. If they take a nuanced approach and contrast differences in respective philosophies, that’s promising, because it means they’ve actually thought through what makes them different, and their product and culture likely will be stronger for it.

It also likely just means fewer dicks at the company. You can only deal with so much hatred in life before it sucks you up into a hole.

ymmv

I get that startups are supposed to be — by definition, really — delusional, in some respect. You’re building something that wasn’t there before, and it takes a lot of faith to build a nascent idea up into something big. So you need a leader to basically throw down so everyone can rally behind her.

Maybe I’m an ancient, grizzled old industry fuck now that I’m nearly 31, but I’m weary of seeing the sky-high bonkersmobiles driving around town these days. That’s part of the reason I’m cautiously optimistic about this bubble that will certainly almost certainly okay maybe it’ll pop again soon — it’ll get people a little more realistic about their goals again.

I still think startups are great and can change the world and all that bullshit… I just think it’s worthwhile to stop and think hard about what your potential company is promising you. Catching these things early on in the process can help save you a ton of pain down the road.

And if you’re hearing these things at your current company, well, good luck! You’re assuredly already on a rocket ship, surely, so congrats!

News stories from Wednesday 27 April, 2016

Favicon for Kopozky 14:32 Evolution » Post from Kopozky Visit off-site link

Comic strip: “Evolution”

Starring: The Developer


News stories from Tuesday 26 April, 2016

Favicon for A List Apart: The Full Feed 16:00 Prototypal Object-Oriented Programming using JavaScript » Post from A List Apart: The Full Feed Visit off-site link

Douglas Crockford accurately described JavaScript as the world’s most misunderstood language. A lot of programmers tend to think of it as not a “proper” language because it lacks the common object-oriented programming concepts. I myself developed the same opinion after my first JavaScript project ended up a hodgepodge, as I couldn’t find a way to organize code into classes. But as we will see, JavaScript comes packed with a rich system of object-oriented programming that many programmers don’t know about.

Back in the time of the First Browser War, executives at Netscape hired a smart guy called Brendan Eich to put together a language that would run in the browser. Unlike class-based languages like C++ and Java, this language, which was at some point called LiveScript, was designed to implement a prototype-based inheritance model. Prototypal OOP, which is conceptually different from the class-based systems, had been invented just a few years before to solve some problems that class-based OOP presented and it fit very well with LiveScript’s dynamic nature.

Unfortunately, this new language had to “look like Java” for marketing reasons. Java was the cool new thing in the tech world and Netscape’s executives wanted to market their shiny new language as “Java’s little brother.” This seems to be why its name was changed to JavaScript. The prototype-based OOP system, however, didn’t look anything like Java’s classes. To make this prototype-based system look like a class-based system, JavaScript’s designers came up with the keyword new and a novel way to use constructor functions. The existence of this pattern and the ability to write “pseudo class-based” code has led to a lot of confusion among developers.

Understanding the rationale behind prototype-based programming was my “aha” moment with JavaScript and resolved most of the gripes I had with the language. I hope learning about prototype-based OOP brings you the same peace of mind it brought me. And I hope that exploring a technique that has not been fully explored excites you as much as it excites me.

Prototype-based OOP

Conceptually, in class-based OOP, we first create a class to serve as a “blueprint” for objects, and then create objects based on this blueprint. To build more specific types of objects, we create “child” classes; i.e., we make some changes to the blueprint and use the resulting new blueprint to construct the more specific objects.

For a real-world analogy, if you were to build a chair, you would first create a blueprint on paper and then manufacture chairs based on this blueprint. The blueprint here is the class, and chairs are the objects. If you wanted to build a rocking chair, you would take the blueprint, make some modifications, and manufacture rocking chairs using the new blueprint.

Now take this example into the world of prototypes: you don’t create blueprints or classes here, you just create the object. You take some wood and hack together a chair. This chair, an actual object, can function fully as a chair and also serve as a prototype for future chairs. In the world of prototypes, you build a chair and simply create “clones” of it. If you want to build a rocking chair, all you have to do is pick a chair you’ve manufactured earlier, attach two rockers to it, and voilà! You have a rocking chair. You didn’t really need a blueprint for that. Now you can just use this rocking chair for yourself, or perhaps use it as a prototype to create more rocking chairs.

JavaScript and prototype-based OOP

Following is an example that demonstrates this kind of OOP in JavaScript. We start by creating an animal object:

var genericAnimal = Object.create(null);

Object.create(null) creates a new empty object. (We will discuss Object.create() in further detail later.) Next, we add some properties and functions to our new object:

genericAnimal.name = 'Animal';
genericAnimal.gender = 'female';
genericAnimal.description = function() {
	return 'Gender: ' + this.gender + '; Name: ' + this.name;
};

genericAnimal is a proper object and can be used like one:

console.log(genericAnimal.description());
//Gender: female; Name: Animal

We can create other, more specific animals by using our sample object as a prototype. Think of this as cloning the object, just like we took a chair and created a clone in the real world.

var cat = Object.create(genericAnimal);

We just created a cat as a clone of the generic animal. We can add properties and functions to this:

cat.purr = function() {
	return 'Purrrr!';
};

We can use our cat as a prototype and create a few more cats:

var colonel = Object.create(cat);
colonel.name = 'Colonel Meow';

var puff = Object.create(cat);
puff.name = 'Puffy';

You can also observe that properties/methods from parents were properly carried over:

console.log(puff.description());
//Gender: female; Name: Puffy

The new keyword and the constructor function

JavaScript has the concept of a new keyword used in conjunction with constructor functions. This feature was built into JavaScript to make it look familiar to people trained in class-based programming. You may have seen JavaScript OOP code that looks like this:

function Person(name) {
	this.name = name;
	this.sayName = function() {
		return "Hi, I'm " + this.name;
	};
}
var adam = new Person('Adam');

Implementing inheritance using JavaScript’s default method looks more complicated. We define Ninja as a sub-class of Person. Ninjas have a name, since they are people, and they can also have a primary weapon, such as a shuriken.

function Ninja(name, weapon) {
  Person.call(this, name);
  this.weapon = weapon;
}
Ninja.prototype = Object.create(Person.prototype);
Ninja.prototype.constructor = Ninja;

While the constructor pattern might look more attractive to an eye that’s familiar with class-based OOP, it is considered problematic by many. What’s happening behind the scenes is prototypal OOP, and the constructor function obfuscates the language’s natural implementation of OOP. This just looks like an odd way of doing class-based OOP without real classes, and leaves the programmer wondering why they didn’t implement proper class-based OOP.

Since it’s not really a class, it’s important to understand what a call to a constructor does. It first creates an empty object, then sets the prototype of this object to the prototype property of the constructor, then calls the constructor function with this pointing to the newly-created object, and finally returns the object. It’s an indirect way of doing prototype-based OOP that looks like class-based OOP.
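To make that concrete, here is a minimal sketch of roughly what the engine does when you write new Person('Adam'). The helper name pseudoNew is purely illustrative, and the sketch ignores the edge case where a constructor explicitly returns an object of its own:

function pseudoNew(Constructor) {
	var args = Array.prototype.slice.call(arguments, 1);
	var obj = Object.create(Constructor.prototype); // a fresh object linked to Constructor.prototype
	Constructor.apply(obj, args);                   // run the constructor with `this` pointing at obj
	return obj;                                     // hand back the newly built object
}

var eve = pseudoNew(Person, 'Eve');
console.log(eve.sayName()); // "Hi, I'm Eve"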

The problem with JavaScript’s constructor pattern is succinctly summed up by Douglas Crockford:

JavaScript’s constructor pattern did not appeal to the classical crowd. It also obscured JavaScript’s true prototypal nature. As a result, there are very few programmers who know how to use the language effectively.

The most effective way to work with OOP in JavaScript is to understand prototypal OOP, whether the constructor pattern is used or not.

Understanding delegation and the implementation of prototypes

So far, we’ve seen how prototypal OOP differs from traditional OOP in that there are no classes—only objects that can inherit from other objects.

Every object in JavaScript holds a reference to its parent (prototype) object. When an object is created through Object.create, the passed object—meant to be the prototype for the new object—is set as the new object’s prototype. For the purpose of understanding, let’s assume that this reference is called __proto__ (see footnote 1). Some examples from the previous code can illustrate this point:

The line below creates a new empty object with __proto__ as null.

var genericAnimal = Object.create(null); 

The code below then creates a new empty object with __proto__ set to the genericAnimal object, i.e. rodent.__proto__ points to genericAnimal.

var rodent = Object.create(genericAnimal);
rodent.size = 'S';

The following line will create an empty object with __proto__ pointing to rodent.

var capybara = Object.create(rodent);
//capybara.__proto__ points to rodent
//capybara.__proto__.__proto__ points to genericAnimal
//capybara.__proto__.__proto__.__proto__ is null

As we can see, every object holds a reference to its prototype. Looking at Object.create without knowing what exactly it does, it might look like the function actually “clones” from the parent object, and that properties of the parent are copied over to the child, but this is not true. When capybara is created from rodent, capybara is an empty object with only a reference to rodent.
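You can check this in a console using the standard Object.getOwnPropertyNames and Object.getPrototypeOf methods; right after creation, capybara has no properties of its own:

console.log(Object.getOwnPropertyNames(capybara)); // [] : no own properties yet
console.log(Object.getPrototypeOf(capybara) === rodent); // true : just a reference to its prototype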

But then—if we were to call capybara.size right after creation, we would get S, which was the size we had set in the parent object. What blood-magic is that? capybara doesn’t have a size property yet. But still, when we write capybara.size, we somehow manage to see the prototype’s size property.

The answer is in JavaScript’s method of implementing inheritance: delegation. When we call capybara.size, JavaScript first looks for that property in the capybara object. If not found, it looks for the property in capybara.__proto__. If it didn’t find it in capybara.__proto__, it would look in capybara.__proto__.__proto__. This is known as the prototype chain.

If we called capybara.description(), the JavaScript engine would start searching up the prototype chain for the description function and finally discover it in capybara.__proto__.__proto__ as it was defined in genericAnimal. The function would then be called with this pointing to capybara.

Setting a property is a little different. When we set capybara.size = 'XXL', a new property called size is created in the capybara object. Next time we try to access capybara.size, we find it directly in the object, set to 'XXL'.

Since the prototype property is a reference, changing the prototype object’s properties at runtime will affect all objects using the prototype. For example, if we rewrote the description function or added a new function in genericAnimal after creating rodent and capybara, they would be immediately available for use in rodent and capybara, thanks to delegation.
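As a quick illustration continuing with the objects defined above, setting a property shadows the prototype, while a function added to genericAnimal afterwards (sleep here is just an example name, not from the earlier code) becomes reachable through delegation immediately:

capybara.size = 'XXL';       // creates an own property on capybara
console.log(capybara.size);  // 'XXL', found directly on the object
console.log(rodent.size);    // 'S', the prototype is untouched

genericAnimal.sleep = function() {
	return this.name + ' is sleeping';
};
console.log(capybara.sleep()); // 'Animal is sleeping', delegated up the chain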

Creating Object.create

When JavaScript was developed, its default way of creating objects was the keyword new. Then many notable JavaScript developers campaigned for Object.create, and eventually it was included in the standard. However, some browsers don’t support Object.create (you know the one I mean). For that reason, Douglas Crockford recommends including the following code in your JavaScript applications to ensure that Object.create is created if it is not there:

if (typeof Object.create !== 'function') {
	Object.create = function (o) {
		function F() {}
		F.prototype = o;
		return new F();
	};
}

Object.create in action

If you wanted to extend JavaScript’s Math object, how would you do it? Suppose that we would like to redefine the random function without modifying the original Math object, as other scripts might be using it. JavaScript’s flexibility provides many options. But I find using Object.create a breeze:

var myMath = Object.create(Math);

Couldn’t possibly get any simpler than that. You could, if you prefer, write a new constructor, set its prototype to a clone of Math, augment the prototype with the functions you like, and then construct the actual object. But why go through all that pain to make it look like a class, when prototypes are so simple?

We can now redefine the random function in our myMath object. In this case, I wrote a function that returns random whole numbers within a range if the user specifies one. Otherwise, it just calls the parent’s random function.

myMath.random = function() {
	var uber = Object.getPrototypeOf(this);
	//if the caller passed a valid numeric range, return a whole number inside it
	if (typeof(arguments[0]) === 'number' && typeof(arguments[1]) === 'number' && arguments[0] < arguments[1]) {
		var min = Math.ceil(arguments[0]), max = Math.floor(arguments[1]);
		return min + Math.floor(uber.random() * (max - min + 1));
	}
	//otherwise defer to the parent's random function
	return uber.random();
};

There! Now myMath.random(-5,5) gets you a random whole number between −5 and 5, while myMath.random() gets the usual. And since myMath has Math as its prototype, it has all the functionality of the Math object built into it.

Class-based OOP vs. prototype-based OOP

Prototype-based OOP and class-based OOP are both great ways of doing OOP; both approaches have pros and cons. Both have been researched and debated in the academic world since before I was born. Is one better than the other? There is no consensus on that. But the key points everyone can agree on are that prototypal OOP is simpler to understand, more flexible, and more dynamic.

To get a glimpse of its dynamic nature, take the following example: you write code that extensively uses the indexOf function in arrays. After writing it all down and testing in a good browser, you grudgingly test it out in Internet Explorer 8. As expected, you face problems. This time it’s because indexOf is not defined in IE8.

So what do you do? In the class-based world, you could solve this by defining the function, perhaps in another “helper” class which takes an array or List or ArrayList or whatever as input, and replacing all the calls in your code. Or perhaps you could sub-class the List or ArrayList and define the function in the sub-class, and use your new sub-class instead of the ArrayList.

But JavaScript and prototype-based OOP’s dynamic nature makes it simple. Every array is an object and points to a parent prototype object. If we can define the function in the prototype, then our code will work as is without any modification!

if (!Array.prototype.indexOf) {
	Array.prototype.indexOf = function(elem) {
		//Your magical fix code goes here.
	};
}

You can do many cool things once you ditch classes and objects for JavaScript’s prototypes and dynamic objects. You can extend existing prototypes to add new functionality—extending prototypes like we did above is how the well known and aptly named library Prototype.js adds its magic to JavaScript’s built-in objects. You can create all sorts of interesting inheritance schemes, such as one that inherits selectively from multiple objects. Its dynamic nature means you don’t even run into the problems with inheritance that the Gang of Four book famously warns about. (In fact, solving these problems with inheritance was what prompted researchers to invent prototype-based OOP—but all that is beyond our scope for this article.)
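As a toy illustration of the sort of scheme mentioned above, here is one way you might selectively inherit from multiple objects by copying only named properties onto a single prototype. The createFrom helper and the swimmer/walker/duck objects are purely hypothetical examples, not part of any library:

function createFrom(sources, names) {
	var proto = Object.create(null);
	sources.forEach(function(source) {
		names.forEach(function(name) {
			if (name in source) {
				proto[name] = source[name]; // copy only the properties we asked for
			}
		});
	});
	return Object.create(proto);
}

var swimmer = { swim: function() { return 'splash'; } };
var walker = { walk: function() { return 'step'; }, crawl: function() { return 'scoot'; } };

var duck = createFrom([swimmer, walker], ['swim', 'walk']);
console.log(duck.swim()); // 'splash'
console.log(duck.walk()); // 'step'
console.log(duck.crawl);  // undefined, we never picked it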

Class-based OOP emulation can go wrong

Consider the following very simple example written with pseudo-classes:

function Animal(){
    this.offspring=[];
}

Animal.prototype.makeBaby = function(){ 
    var baby = new Animal();
    this.offspring.push(baby);
    return baby;
};

//create Cat as a sub-class of Animal
function Cat() {
}

//Inherit from Animal
Cat.prototype = new Animal();

var puff = new Cat();
puff.makeBaby();
var colonel = new Cat();
colonel.makeBaby();

The example looks innocent enough. This is an inheritance pattern that you will see in many places all over the internet. However, something funny is going on here—if you check colonel.offspring and puff.offspring, you will notice that each of them contains the same two babies! That’s probably not what you intended—unless you are coding a quantum physics thought experiment.

JavaScript tried to make our lives easier by making it look like we have good old class-based OOP going on. But it turns out it’s not that simple. Simulating class-based OOP without completely understanding prototype-based OOP can lead to unexpected results. To understand why this problem occurred, you must understand prototypes and how constructors are just one way to build objects from other objects.

What happened in the above code is very clear if you think in terms of prototypes. The variable offspring is created when the Animal constructor is called—and it is created in the Cat.prototype object. All individual objects created with the Cat constructor use Cat.prototype as their prototype, and Cat.prototype is where offspring resides. When we call makeBaby, the JavaScript engine searches for the offspring property in the Cat object and fails to find it. It then finds the property in Cat.prototype—and adds the new baby in the shared object that both individual Cat objects inherit from.
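You can confirm this diagnosis in a console: the offspring array lives on Cat.prototype, and both cats delegate to that very same array:

console.log(Cat.prototype.hasOwnProperty('offspring')); // true, it lives on the shared prototype
console.log(puff.hasOwnProperty('offspring'));          // false, puff only delegates to it
console.log(puff.offspring === colonel.offspring);      // true, the same array, hence the shared babies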

So now that we understand what the problem is, thanks to our knowledge of the prototype-based system, how do we solve it? The solution is that the offspring property needs to be created in the object itself rather than somewhere in the prototype chain. There are many ways to solve it. One way is that makeBaby ensures that the object on which the function is called has its own offspring property:

Animal.prototype.makeBaby = function() {
	var baby = new Animal();
	if (!this.hasOwnProperty('offspring')) {
		this.offspring = [];
	}
	this.offspring.push(baby);
	return baby;
};

Backbone.js runs into a similar trap. In Backbone.js, you build views by extending the base Backbone.View “class.” You then instantiate views using the constructor pattern. This model is very good at emulating class-based OOP in JavaScript:

//Create a HideableView "sub-class" of Backbone.View
var HideableView = Backbone.View.extend({
    el: '#hideable', //the view will bind to this selector
    events : {
        'click .hide': 'hide'
    },
    //this function was referenced in the click handler above
    hide: function() {
      //hide the entire view
    	$(this.el).hide();
    }
});

var hideable = new HideableView();

This looks like simple class-based OOP. We inherited from the base Backbone.View class to create a HideableView child class. Next, we created an object of type HideableView.

Since this looks like simple class-based OOP, we can use this functionality to conveniently build inheritance hierarchies, as shown in the following example:

var HideableTableView = HideableView.extend({
    //Some view that is hideable and rendered as a table.
});

var HideableExpandableView = HideableView.extend({
    initialize: function() {
        //add an expand click handler. We didn’t create a separate
        //events object because we need to add to the
        //inherited events.
        this.events['click .expand'] = 'expand';
    },
    expand: function () {
    	//handle expand
    }
});

var table = new HideableTableView();
var expandable = new HideableExpandableView();

This all looks good while you’re thinking in class-based OOP. But if you try table.events['click .expand'] in the console, you will see “expand”! Somehow, HideableTableView has an expand click handler, even though it was never defined in this class.

You can see the problem in action here: http://codepen.io/anon/pen/qbYJeZ

The problem above occurred for the same reason outlined in the earlier example. In Backbone.js, you need to see past the indirection created by making it look like classes and look at the prototype chain hidden in the background. Once you comprehend how the prototype chain is structured, you can find a simple fix for the problem.
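One possible fix, sketched here under the assumption that Underscore’s _.extend is available (Backbone already depends on Underscore), is to give each instance its own events object instead of writing into the one shared through the prototype, and then re-bind the handlers with delegateEvents:

var HideableExpandableView = HideableView.extend({
    initialize: function() {
        //copy the inherited events onto an object owned by this instance,
        //add the new handler, then re-bind; mutating this.events directly
        //would write into the object shared through the prototype chain
        this.events = _.extend({}, this.events, { 'click .expand': 'expand' });
        this.delegateEvents();
    },
    expand: function () {
    	//handle expand
    }
});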

In conclusion

Despite prototypal OOP underpinning one of the most popular languages out there today, programmers are largely unfamiliar with what exactly prototype-based OOP is. JavaScript itself may be partly to blame because of its attempts to masquerade as a class-based language.

This needs to change. To work effectively with JavaScript, developers need to understand the how and why of prototype-based programming—and there’s much more to it than this article covers. Beyond mastering JavaScript, learning about prototype-based programming can also teach you a lot about class-based programming, since you get to compare and contrast the two approaches.

Further Reading

Douglas Crockford’s note on prototypal programming was written before Object.create was added to the standard.

An article on IBM’s developerWorks reinforces the same point on prototypal OOP. This article was the prototypal “aha” moment for me.

The following three texts will be interesting reads if you’re willing to dive into the academic roots of prototype-based programming:

Henry Lieberman of MIT Media Labs compares class-based inheritance with prototype-based delegation and argues that prototype-based delegation is the more flexible of the two concepts.

Classes versus Prototypes in Object-Oriented Languages is a proposal to use prototypes instead of classes by the University of Washington’s Alan Borning.

Lieberman’s and Borning’s work in the 1980s appears to have influenced the work that David Ungar and Randall Smith did to create the first prototype-based programming language: Self. Self went on to become the basis for the prototype-based system in JavaScript. This paper describes their language and how it omits classes in favor of prototypes.

 

Footnotes

  • 1. The __proto__ property is used by some browsers to expose an object’s prototype, but it is not standard and is considered obsolete. Use Object.getPrototypeOf() as a standards-compliant way of obtaining an object’s prototype in modern browsers.

News stories from Friday 01 April, 2016

Favicon for Grumpy Gamer 15:45 Hey, guess what day it is... » Post from Grumpy Gamer Visit off-site link

That's right, it's the day the entire Internet magically thinks it's funny.

Pro-tip: You're not.

As Grumpy Gamer has been for going on twelve years, we're 100% April Fools' Day joke free.

I realize that's kind of ironic to say, since this blog is pretty-much everything free these days as I'm spending all my time blogging about Thimbleweed Park, the new point & click adventure game I'm working on.

And no, that is not a joke, check it out.

News stories from Monday 28 March, 2016

Favicon for Kopozky 10:27 Comme Il Faut » Post from Kopozky Visit off-site link

Comic strip: “Comme Il Faut”

Starring: The Designer and his girl-friend


News stories from Wednesday 16 March, 2016

Favicon for Zach Holman 02:00 Firing People » Post from Zach Holman Visit off-site link

So it’s been a little over a year since GitHub fired me.

I initially made a vague tweet about leaving the company, and then a few weeks later I wrote Fired, which made it pretty clear that leaving the company was involuntary.

The reaction to that post was pretty interesting. It hit 100,000 page views within the first few days after publishing, spurred 389 comments on Hacker News, and indeed, is currently the 131st most-upvoted story on Hacker News of all time.

Let me just say one thing first: it’s pretty goddamn weird to have so many people interested in discussing one of your biggest professional failures. There were a few hard-hitting Real Professional Journalists out there launching some bombs from the 90 yard line, too:

If an employer has decided to fire you, then you’ve not only failed at your job, you’ve failed as a human being.

and

Why does everyone feel compelled to live their life in the public? Shut up and sit down! You ain’t special, dear..

and

Who is the dude?

You and me both, buddy. I ask myself that every day.


The vast majority of the comments were truly lovely, though, as well as the hundreds of emails I got over the subsequent days. Over and over again it became obvious at how commonplace getting fired and getting laid off is. Everyone seemingly has a story about something they fucked up, or about someone that fucked them up. This is not a rare occurrence, and yet no one ever talks about it publicly.

As I stumbled through the rest of 2015, though, something that bothered me at the onset crept forward more and more: the post, much like the initial vague tweet, didn’t say anything. That was purposeful, of course; I was still processing what the whole thing meant to me, and what it could mean.

I’ve spent the last year constantly thinking about it over and over and over. I’ve also talked to hundreds and hundreds of people about the experience and about their experiences, ranging from the relatively unknown developer getting axed to executives getting pushed out of Fortune 500 companies.

It bothers me no one really talks about this. We come up with euphemisms, like “funemployment!” and “finding my next journey!”, while all the while ignoring the real pains associated with getting forced out of a company. And christ, there’s a lot of real pain that can happen.

How can we start fixing these problems if we can’t even talk about them?

Me speaking at Bath Ruby

I spoke this past week at Bath Ruby 2016, in Bath, England. The talk was about my experiences leaving GitHub, as well as the experiences of so many of the people I’ve talked to and studied over the last year. You can follow along with the slide deck if you’d like, or wait for the full video of the talk to come out in the coming weeks.

I also wanted to write a companion piece as well. There’s just a lot that can’t get shoehorned into a time-limited talk. That’s what you’re reading right now. So curl up by the fire, print out this entire thing onto like a bajillion pages of dead tree pulp, and prepare to read a masterpiece about firing people. Once you realize that you’re stuck with this drivel, you can toss the pages onto the fire and start reading this on your iPad instead.


The advice people most readily give out on this topic today is:

🚒🔥FIRE FAST 🔥🚒

“Fire fast”, they say! You have to fire fast because we’re moving really fuckin’ fast and we don’t have no time to deal with no shitty people draggin’ us down! Move fast and break people! Eat a big fat one, we’re going to the fuckin’ MOOOOOOOOON!

What the shit does that even mean, fire fast? Should I fire people four minutes after I hire them? That’ll show ‘em!

What about after a mistake? Should we fire people as retribution? Do people get second chances?

When we fire people, how do we handle things like continuity of insurance? Or details like taxes, stock, and follow-along communication? How do we handle security concerns when someone leaves an organization?

There’s a lot of advice that’s needed beyond fire fast. “Move fast and break people” doesn’t make any goddamn sense to me.

I’ve heard a lot of funny stories from people in the last year. From the cloud host employee who accidentally uploaded a pirated TV show to company servers and got immediately fired his second week on the job (“oops!” he remarked in hindsight) to the Apple employee who liked my initial post but “per company policy I’m not allowed to talk about why your post may or may not be relevant to me”.

I’ve also heard a lot of sad stories too. From someone whose board pushed them out of their own startup, but was forced to say they resigned for the sake of appearance:

There aren’t adjectives to explain the feeling when your baby tells you it doesn’t want/need you any more.

We might ask: why should we even care about this? They are ex-employees, after all. To quote from the seminal 1999 treatise on corporate technology management/worker relations, Office Space:

The answer, of course, is: we should care about all this because we’re human beings, dammit. How we treat employees, past and present, is a reflection on the company itself. Great companies care deeply about the relationship they maintain with everyone who has contributed to the success of the company.

This is kind of a dreary subject, but don’t worry too much: I’m going to aspire to make this piece as funny and as light-hearted as I can. It’s also going to be pretty long, but that’s okay, sometimes long things are worth it. (Haha dick joke, see? See what I’m doing here? God these jokes are going to doom us all.)

Perspectives

One last thing before we can finally ditch from these long-winded introductory sections: what you’re going to be reading is primarily my narrative, with support from many, many other stories hung off of the broader points.

Listen: I’m not super keen on doing this. I don’t particularly want to make this all about me, or about my experiences getting fired or quitting from any of my previous places of employment. This is a particularly depressing aspect in my life, and even a year later I’m still trying to cope with as much depression as anyone can really reasonably deal with.

But I don’t know how to talk about this in the abstract. The specifics are really where all the important details are. You need the specifics to understand the pain.

As such, this primarily comes at the problem from a specific perspective: an American living in San Francisco for a California-based tech startup.

When I initially wrote my first public “I’m fired!” post, some of you in more-civilized places with strong employee-friendly laws like Germany or France were aghast: who did I murder to get fired from my job? How many babies did I microwave to get to that point? Am I on a watchlist for even asking you that question?

California, though, is an at-will state. Employees can be fired for pretty much any reason. If your boss doesn’t like the color of shoes you’re wearing that day, BOOM! Fired. If they don’t like how you break down oxygen using your lungs in order to power your feeble human body, BOOM! Fired. Totally cool. As long as they’re not discriminating against federally-protected classes — religion, race, gender, disability, etc. — they’re in the clear.

Not all of you are working for companies like this. That’s okay — really, that’s great! — because I still think this touches on a lot of really broad points relevant to everyone. As I was building this talk out, I ended up noticing a ton of crossover with generally leaving a company, be it intentionally, unintentionally, on friendly terms, and on hostile terms. Chances are you’re not going to be at your company forever, so a lot of this is going to be helpful for you to start thinking about now, even if you ultimately don’t leave until years in the future.

Beyond that, I tried to target three different perspectives throughout all this, and I’ll call them out in separately-colored sections as well:

You

You: your perspective. If you ever end up in the hot seat and realize you’re about to get fired, this talk is primarily for you. There’s a lot of helpful hints for you to take into consideration in the moment, but also for the immediate future as well.

Company

Company: from the perspective of the employer. Again, the major thing I’m trying to get across is to normalize the idea of termination of employment. I’m not trying to demonize the employer at all, because there are a lot of things the employer can do to really help the new former employee out and to help the company out as well. I’ll make a note of them in these blocks.

Coworker

Coworker: the perspective that’s really not considered very much is the coworker’s perspective. Since they’re not usually involved in the termination itself, a lot of times it’s out of sight, out of mind. That’s a bit unfortunate, because there’s also some interesting aspects that can be helpful to keep in mind in the event that someone you work with gets fired.

Got it? Okay, let’s get into the thick of things.

Backstory

I’m Zach Holman. I was number nine at GitHub, and was there between 2010 and 2015. I saw it grow to 250 employees (they’ve since doubled in size and have grown to 500 in the last year).

I’m kind of at the extreme end of the spectrum when it comes to leaving a company, which can be helpful for others for the purposes of taking lessons away from an experience. It had been a company I had truly grown to love, and in many ways I had been the face of GitHub, as I did a lot of talks and blog posts that mentioned my experiences there. More than once I had been confusingly introduced as a founder or CEO of the company. That, in part, was how I ultimately was able to sneak into the Andreessen Horowitz corporate apartments and stayed there rent-free for sixteen months. I currently have twelve monogrammed a16z robes in my collection, and possibly was involved in mistakenly giving the greenlight to a Zenefits employee who came by asking if they could get an additional key to the stairwell for a… meeting.

Fast forward to summer of 2014: I had been the top committer to the main github/github repository for the last two years, I had just led the team that shipped one of the last major changes to the site, and around that time I had had a mid-year performance review with my manager that was pretty glowing and had resulted in me receiving one of the largest refresh grants they had given during that review period.

This feels a little self-congratulatory to write now, of course, but I’ll lend you a quick reminder: I did get fired nonetheless, ha. The point I’m trying to put across with all this babble is that on the surface, I was objectively one of the last employees one might think to get fired in the subsequent six months. But everyone’s really at risk: unless you own the company, the company owns you.

Around the start of the fall, though, I had started feeling pretty burnt out. I had started to realize that I hadn’t taken a vacation in five years. Sure, I’d been out of town, and I’d even ostensibly taken time off to have some “vacations”, but in hindsight they were really anything but: I’d still be checking email, I’d still be checking every single @github mention on Twitter, and I’d still dip into chat from time to time. Mentally, I would still be in the game. That’s a mistake I’ll never make again, because though I had handled it well for years — and even truly enjoyed it — it really does grind you down over time. Reading virtually every mention of your company’s name on Twitter for five straight years is exhausting.

By the time November came around, I was looking for a new long-term project to take on. I took a week offsite with three other long-tenured GitHubbers and we started to tackle a very large new product, but I think we were all pretty well burnt out by then. By the end of the week it was clear to me how fried I was; brainstorming should not have been that difficult.

I chatted with the CEO at this point about things. He’s always been pretty cognizant of the need for a good work/life balance, and encouraged taking an open-ended sabbatical away from work for awhile.

My preference would be for you to stay at GitHub […] When you came back would be totally up to you

By February, my manager had sent me an email with the following:

Before agreeing to your return […] we need to chat through some things

You

First thing here from your perspective is to be wary if the goalposts are getting moved on you. I’m not sure if there was miscommunication higher up with my particular situation, but in general things start getting dicey if there’s a set direction you need to head towards and that direction suddenly gets shifted.

After I got fired, I talked to one of my mentors about the whole experience. This is a benefit of finding mentors who have been through everything in the industry way before you even got there: they have that experience that flows pretty easily from them.

After relaying this story, my friend immediately laughed and said, “yeah, that’s exactly the moment when they started the process to fire you”. I kinda shrugged it off and suggested it was a right-hand-meet-left kinda thing, or maybe he was reading it wrong. He replied no, that is exactly the kind of email he had sent in the past when he was firing someone at one of his companies, and it was also the kind of email he had received right before he was fired in the past, too.

Be wary of any sudden goalposts, really. I’ll mention later on about PIPs — performance improvement plans — and how they can be really helpful to employees as well as to employers, but in general if someone’s setting you up with specific new guidelines for you to follow, you should take it with a critical eye.

At this point things were turning a tad surprising. By February, the first time I received an email from my manager about all this, I hadn’t been involved with the company at all for two months through my sabbatical, and I hadn’t even talked to my manager in four months, ever since she had decided that 1:1s weren’t really valuable between her and me. This was well and fine with me, since I had been assigned to a bit of a catch-all team where none of its members worked together on anything, and I was pretty comfortable moving around the organization and working with others in any case.

I was in Colorado at the time, but agreed to meet up and have a video chat about things. When I jumped on the call, I noticed that — surprise! — someone from HR was on the call as well.

Turns out, HR doesn’t normally join calls for fun. Really, I’m not sure anyone joins video chats for fun. So this should have been the first thing that tickled my spidey-sense, but I kinda just tucked it in the back of my mind since I didn’t really have time to consider things much while the call was going on.

At this point, I was feeling pretty good about life again; the time off had left me feeling pretty stoked about building things again, and I had a long list of a dozen things I was planning on shipping in my first month back on the job. The call turned fairly confrontational off the bat, though; my manager kept asking how I felt, I said I felt pretty great and wanted to get to work, but that didn’t really seem to be the correct answer. Things took a turn south and we went back-and-forth about things. This led to her calling me an asshole twice (in front of HR, again, who didn’t seem to mind).

In hindsight, yeah, I was probably a bit of an asshole; I tend to clam up during bits of confrontation that I hadn’t thought through ahead of time, and most of my responses were pretty terse in the affirmative rather than offering a ton of detail about my thoughts.

After the conversation had ended on a fairly poor note, I thought things through some more and found it pretty weird to be in a position with a superior who was outwardly fairly hostile to me, and I made my first major mistake: I talked to HR.

I was on really good terms with the head of HR, so the next day I sent an email to her making my third written formal request in the prior six months or so to be moved off of my team and onto another team. I had some thoughts on where I’d rather see myself, but really, any other team at that point I would have been happy with; I had pretty close working relationships with all of the rest of the managers at the company. On top of that, the team I was currently on didn’t have any association with each other, so I figured it wouldn’t be a big deal to switch to another arbitrary team.

The head of HR was really great, and found the whole situation to be a bit baffling. We started talking about which teams might make sense, and I asked around to a couple people as to whether they would be happy with a new refugee (they were all thumbs-up on the idea). She agreed to talk to some of the higher-ups about things, and we’d probably arrange a sit-down in person when I came back in a few days to SF to sort out the details.

You

Don’t talk to HR.

This pains me to say. I’ve liked pretty much every person in HR at all the companies I’ve worked for; certainly we don’t want to view them as the enemy.

But you have to look to their motivations, and HR exists only to protect the company’s interests. Naturally you should aim to be cordial if HR comes knocking and wants to talk to you, but going out of your way to bring something to the attention of HR is a risk.

Unfortunately, this is especially important to consider if you’re in a marginalized community. Many women in our industry, for example, have gone to HR to report sexual harassment and promptly found that they were the one who got fired. Similar stories exist in the trans community and with people who have had to deal with racial issues.

Ultimately it’s up to you whether you think HR at your company can be trusted to be responsible with your complaint, but it also might be worthwhile to consider alternative options as well (i.e., speaking with a manager if you think they’d be a strength in the dispute, exploring legal or criminal recourse, and so on).

HR is definitely a friend. But not to you.

Company

Avoid surprises. I’ve talked with a lot of former employees over the last year, and the ones with the most painful stories usually stem from being unceremoniously dropped into their predicament.

From a corporate perspective, it’s always painful to lose employees — regardless of the manner in which the employee leaves the company. But it’s almost always going to be more painful for the former employee, too.

I was out at a conference overseas a few years back with a few coworkers. One of my coworkers received a notice that he was to sit down on a video chat with the person he was reporting to at the time. He was fretting about it given the situation was a bit sudden and out of the ordinary, but I tried to soothe his fears, joking that they wouldn’t fire him right before an international conference that he was representing the company at. Sure enough, they fired him. Shows what I really knew about this stuff.

Losing your job is already tough. Dealing with it without a lot of lead-up to consider your options is even harder.

One of the best ways to tackle this is with a performance improvement plan, or PIP. Instituting a PIP is relatively straightforward: you tell the employee that they’re not really where you’d like to see them and that they’re in danger of losing their job, but you set clear goals so that the employee gets the chance at turning things around.

This is typically viewed as the company covering their ass so when they fire you it’s justified, but really I view it as a mutual benefit: it’s crystal-clear to the employee as to what they need to do to change their status in the organization. Sometimes they just didn’t know they were a low performer. Sometimes there are other problems in their life that impacted their performance, and it’s great to get that communication out there. Sometimes someone’s really not up to snuff, but they can at least spend some time preparing themselves prior to being shown the door.

The point is: surprise firings are the worst types of firings. It’s better for the company and for the employee to both be clear as to what their mutual expectations are. Then they can happily move forward from there.

At this point, I finished up my trip and flew back to San Francisco. It was time to chat in person.

Fired

I was fired before I entered the room.

You’re not going to be happy here. We need to move you out of the company.

That was the first thing that was said to me in the meeting between me, the CEO, and the head of HR. Not even sure I had finished sitting down, but I only needed a glance at the faces to know what was in the pipeline for this meeting.

You’re not going to be happy here is a bullshit phrase, of course, but not one that I have a lot of problems with in hindsight. My happiness has no impact on the company — my output does — but I think it was a helpful euphemism, at least.

You

Chill. The first thing I’d advise if you find yourself in the hot seat is to just chill out. I did that reasonably well, I think, by nodding, laughing, and giving each person in the room a hug before splitting. It was a pretty reasonable break, and I got to have a long chat with the head of HR immediately afterwards where we shot the shit about everything for awhile.

You ever watch soccer (or football, for you hipster international folk that still refuse to call it by its original name)? Dude gets a yellow card, and more often than not what does he do? Yells at the ref. Same for any sport, really. How many times does the ref say ah shit, sorry buddy, totally got it wrong, let me grab that card back? It just doesn’t happen.

That’s where you are in this circumstance. You can’t argue yourself back into a job, so don’t try to. At this point, just consider yourself coasting. If it’s helpful to imagine you’re a tiny alien controlling your humanoid form from inside your head a la the tiny outworlder in Men in Black, go for it.

My friend’s going through a particularly gnarly three- or four-weeks of getting fired from a company right now (don’t ask; it’s a disaster). This is the same type of advice I gave them: don’t feel like you need to make any statements or sign any legal agreements or make any decisions whatsoever while you’re in the room or immediately outside of it. If there’s something that needs your immediate attention, so be it, but most reasonable companies are going to give you some time to collect your thoughts, come up with a plan, and enact it instead of forcing you to sign something at gunpoint.

Remember: even if you’re really shit professionally, you’ll probably only get fired what, every couple of years? If you’re an average person what, maybe once a lifetime? Depending on the experience of management, the person firing you may deal with this situation multiple times a year. They’re better at it than you are, and they’re far less stressed out about it. I was in pretty good spirits at the time, but looking back I certainly wasn’t necessarily in my normal mindset.

Emotionally compromised

You’re basically like new-badass-Spock in the Star Trek reboot: you have been emotionally compromised; please note that shit in the ship’s log.

I’m still not fully certain why I got the axe; it was never made explicit to me. I asked other managers and those on the highest level of leadership, and everyone seemed to be as confused as I was.

My best guess is that it’s Tall Poppy Syndrome, a phrase I was unfamiliar with until an Aussie told me about it. (Everything worthwhile in life I’ve learned from an Australian, basically.) The tallest poppy gets cut first.

With that, I don’t mean that I’m particularly talented or anything like that; I mean that I was the most obvious advocate internally for certain viewpoints, given how I’ve talked externally about how the old GitHub worked. In Japanese the phrase apparently translates to The tallest nail gets the hammer, which I think works better for this particular situation, heh. I had on occasion mentioned internally my misgivings about the lack of movement happening on any product development, and additionally the increasing unhappiness of many employees due to some internal policy changes and company growth.

Improving the product and keeping people happy are pretty important in my eyes, but I had declined earlier requests to move towards the management side of things, though, so primarily I was fairly heads-down on building stuff at that point rather than leading the charge for a lot of change internally. So maybe it was something else entirely; I’m not sure. I’m left with a lot of guesses.

Company

Lockdown. The first thing to do after — or even while — someone is fired is to start locking down their access to everything. This is pretty standard to remove liability from any bad actors. Certainly the vast majority of people will never be a problem, but it’s also not insulting or anything from a former employee standpoint, either. (It’s preferred, really: if I’ve very recently been kicked out of a company, I’d really like to be removed from production access as soon as possible so I don’t even have to worry about accidentally breaking something after my tenure is finished, for example. It’s best for everyone.)

From a technical standpoint, you should automate the process of credential rolling as much as possible. All the API keys, passwords, user accounts, and other credentials should be regenerated and replaced in one fell swoop.

Automate this because, well, as you grow, more people are inherently going to leave your company, and streamlining this process is going to make it easier on everyone. No one gets up in the morning, jumps out of bed, throws open the curtains and yells out: OH GOODIE! I GET TO FIRE MORE PEOPLE TODAY AND CHANGE CONFIG VALUES FOR THE NEXT EIGHT HOURS! THANK THE MAKER!

Ideally this should be as close to a single console command or chat command as possible. If you’re following twelve-factor app standards, your config values should already be stored in the environment rather than tucked deep into code constants. Swap them out, and feel better about yourself while you have to perform a pretty dreary task.
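As a purely hypothetical sketch of what that one-shot command might do (the rollSecret helper and the SECRETS list are made-up placeholders, not anyone’s real setup), the core is just generating fresh values for every secret in one pass and handing them to whatever config store you actually use:

var crypto = require('crypto');

// Made-up list of secrets to rotate; in practice this would come from your config store.
var SECRETS = ['DATABASE_PASSWORD', 'SESSION_SECRET', 'EXTERNAL_API_KEY'];

function rollSecret(name) {
	// Generate a fresh random value; a real script would also push it to the
	// environment/config store and restart or redeploy the affected apps.
	return crypto.randomBytes(32).toString('hex');
}

var rotated = {};
SECRETS.forEach(function(name) {
	rotated[name] = rollSecret(name);
	console.log('rotated ' + name);
});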

Understand the implications of what you’re doing, though. I remember hearing a story from years back of someone getting let go from a company. Sure, that sucks, but what happened next was even worse: the firee had just received their photos back from their recent wedding, so they tossed them into their Dropbox. At the time, Dropbox didn’t really distinguish between personal and corporate accounts, and all the data was kind of mixed together. When the person was let go, the company removed access to the corporate Dropbox account, which makes complete sense, of course. Unfortunately that also deleted all their wedding photos. Basically like salt in an open wound. Dropbox has long since fixed this problem by better splitting up personal and business accounts, but it’s still a somewhat amusing story of what can go wrong if there’s not a deeper understanding of the implications of cutting off someone’s access.

Understand the real-world implications as well. Let’s take a purely hypothetical, can’t-possibly-have-happened-in-real-life example of this.

Does your company:

  • Give out RFID keyfobs instead of traditional metal keys in order to get into your office?
  • Does your office have multiple floors?
  • Do you disable the employee’s keyfob at the exact same time they’re getting fired?
  • Do you, for the sake of argument, also require keyfob access inside your building to access individual floors?
  • Is it possible — just possible at all, stay with me here — that the employee was fired on the third floor?
  • And is it possible that the employee would then go down to the second floor to collect their bag?
  • Is it at all possible that you’ve locked your newly-fired former employee INTO THE STAIRWELL, unable to enter the second floor, instead having to awkwardly text a friend they knew would be next to the door with a very unfortunate HI CAN YOU UNLOCK THE SECOND FLOOR DOOR FOR ME SINCE MY KEYFOB DOESN’T WORK PROBABLY BECAUSE I JUST GOT FIRED HA HA HA YEAH THAT’S A THING NOW WE SHOULD CHAT.

Totally hypothetical situation.

Yeah, totally was me. It was hilarious. I was laughing for a good three minutes while someone got up to grab the door.

Anyway, think about all of these implications. Particularly if the employee loses access to their corporate email account; many times services like healthcare, stock information, and payroll information may be tied to that email address, and that poses even more problems for the former employee.

This also underscores the benefit of keeping a cordial relationship between the company and the former employee. When I was fired, I found I still had access to a small handful of internal apps whose OAuth tokens weren’t getting rolled properly. I shot an email to the security team, so hopefully they were invalidated and taken care of for future former employees.

Although now that I think about it, I still have access to the analytics for many of GitHub’s side properties; I’ve been unable to get a number of different people to pull the plug for me. I think instead I’ll just say it’s a clear indicator of the trust my former employer has in my relationship with them. :heart:

One last thing to add in this section. My friend Reg tweeted this recently:

I really like this sentiment a lot, and will keep it in mind when I’m in that position next. Occasionally you’ll see the odd person mention something about this over Twitter or something, and it’s clear that firing someone is a stressful process. But be careful who you vent that stress to — vent up the chain of command, not down — because do keep in mind that you’re still not the one suffering the most from all this.

Coworker

Determine the rationale. Once someone’s actually been fired, this is really your first opportunity as a coworker to have some involvement in the process. Certainly you’re not aiming to butt in and try to be the center of everything, here, but there’s some things you can keep in mind to help your former coworker, your company, and ultimately, yourself.

Determining the rationale I think is the natural first step. You’re no help to anyone if you get fired as well. And sometimes — but obviously not always — if someone you work with gets fired, it could pose problems for you too, particularly if you work on the same team.

Ask around. Your direct manager is a great place to start if you have a good relationship with them. You don’t necessarily need to invade the firee’s privacy and pry into every single detail, but I think it’s reasonable to ask if the project you’re working on is possibly going to undertake a restructuring, or if it might get killed, or any number of other things. Don’t look desperate, of course — OH MY GOD ARE WE ALL GOING TO GET SHITCANNED???? — but a respectful curiosity shouldn’t hurt in most healthy organizations.

Gossip is a potential next step. Everyone hates on gossip, true, but I think it can have its place for people who aren’t in management positions. Again, knowing every single detail isn’t really relevant to you, but getting the benchmark of people around you on your level can be helpful for you to judge your own position. It also might be helpful as a sort of mea culpa when you talk to your manager, as giving them a perspective from the boots on the ground, so to speak, might be beneficial for them when judging the overall health of the team.

Company

Be truthful internally. Jumping back to the employer’s side of things, just be sure to be truthful. Again, the privacy of your former employee’s experience is very important to keep, but how to talk about it to other employees can be pretty telling.

Be especially cautious when using phrases like mutually agreed. Very few departures are mutually-agreed upon. If they were thinking of leaving, there’s a good chance they’d have already left.

In my case, my former manager emailed her team and included this sentence:

We had a very honest and productive conversation with Zach this morning and decided it was best to part ways.

There certainly wasn’t any conversation, and the sentence implies that it was a mutual decision. She wasn’t even in the room, either, so the we is a bit suspect as well, ha.

In either case, I was already out the door, so it doesn’t bother me very much. But everyone in the rank-and-file is better-networked than you are as a manager, and communication flows pretty freely once an event happens. So be truthful now, otherwise you poison the well for future email announcements. Be a bit misleading today and everyone will look at you as being misleading in the future.

The last bit to consider is group firing: firing more than one person on the same day. This is a very strong signal, and it’s up to you as to what you’re trying to signal here. If you take a bunch of scattered clear under-performers and fire them all on the same day, then the signal might be that the company is cleaning up and is focused squarely on improving problems. If the decision appears rather arbitrary, you run the risk of signaling that firing people is also arbitrary, and your existing employees might be put in a pretty stressful situation when reflecting on their own jobs.

Firing is tough. If you’ve ever done it before you know it’s not necessarily just about the manager and the employee: it can impact a lot more people than that.

So, I was fired. I walked out of the room, got briefly locked inside the office stairwell, and then walked to grab my stuff.

After

What next?

It’s a tough question. At this point I was kind of on auto-pilot, with the notion of being fired not really settling out in my mind yet.

I went to where my stuff was and started chatting with my closer friends. (I wasn’t escorted out of the building or any of that silliness.)

I started seeing friendly faces walk by and say hi, since in many cases I hadn’t seen or talked to most of my coworkers in months, having never come back in an official capacity from my sabbatical. I immediately took to walking up to them, giving them a long, deeply uncomfortable and lingering hug, and then whispering in their ear: it was very nice working with you. also I just got fired. It was a pretty good troll given such short notice, all things considered. We all had a good laugh, and then people stuck around so they could watch me do it to someone else. By the end I had a good dozen or so people around chatting and avoiding work. A+++ time, would do again.

lol jesus just realized what I typed, god no, I’d probably avoid getting fired the next time, I mean. I’m just pretty dope at trolling is all I’m sayin’.

Egregious selfie of the author

Eventually I walked out of the office and started heading towards tacos, where I was planning on drinking way too many margaritas with a dear friend who was still at the company (for the time being). Please note: tacos tend to solve all problems. By this point, the remote workers had all heard the news, so my phone started blowing up with text messages. I was still feeling pretty good about life, so I took this selfie and started sending it to people in lieu of going into a ton of detail with each person about my mental state.

In prepping this talk, I took a look at this selfie for the first time in quite a number of months and noticed I was wearing earbuds. Clearly I was listening to something as I strutted out of the office. Luckily I scrobble my music to Last.fm, so I can go back and look. So that’s how I found out what I was listening to:

Eponine

On My Own, as sung by Eponine in the award-winning musical Les Misérables. Shit you not. It’s like I’m some emo fourteen-year-old just discovering their first breakup or something. Nice work, Holman.

Shortly thereafter, I tweeted the aforementioned tweet:

Again, it’s pretty vague and didn’t address whether I had quit or I’d been fired. I was pretty far away from processing things. I think being evasive made some sense at the time.

I’ve been journaling every few days pretty regularly for a few years now, and it’s one of the best things I’ve ever done for myself. I definitely wrote a really long entry for myself that day. I went back and took a look while I was preparing this talk, and this section jumped out at me:

The weird part is how much this is about me. This is happening to me right now. I didn’t really expect it to feel so intimate, a kind of whoa, this is my experience right now and nobody else’s.

In hindsight, yeah, that’s absolutely one of the stronger feelings I still feel from everything. When you think about it, most of the experiences you have in life are shared with others: join a new job, share it with your new coworkers. Get married, share it with your new partner and your friends and family. Best I can tell, getting fired and dying are one of the few burdens that are yours and yours alone. I didn’t really anticipate what that would feel like ahead of time.

By later in the night, I was feeling pretty down. It was definitely a roller coaster of a day: text messages, tweets, margaritas, financial advisors, lawyers, introspective walks in the park. I didn't necessarily think I'd be flying high for the rest of my life, but that didn't make the crash any easier, either. And that experience has matched my last year, really: some decent highs, some pretty dangerous lows. Five years of being that deeply intertwined with a company toes a dangerous line, and I've been paying for it ever since.

Loose Ends

Good god, it really takes an awful lot of work to leave work.

There’s a number of immediate concerns you need to deal with:

  • Who owns your physical hardware? Is your computer owned by the company? Your phone? Any other devices? Do you need to wipe any devices, or pull personal data off of any of them?
  • Do you have any outstanding expenses to deal with? I had a conference in Australia a few weeks later that I had to deal with. I had told them that GitHub would pay for my expenses to attend, but I hadn't booked that trip yet. Luckily it was no problem for GitHub to pick up the tab (I was still representing the company there, somewhat awkwardly), but it was still something else I needed to remember to handle right away.
  • How's your healthcare situation, if you're unfortunate enough to live in a country where healthcare Is A Thing? In the US, COBRA exists to provide continuity of health insurance between jobs, and it should cover you during any gaps in your coverage. It was one more thing to have to worry about, although admittedly I was pleasantly surprised at how (relatively) easy using COBRA was; I was expecting to jump through some really horrible hoops.

The next thing to consider is severance pay. Each company tends to handle things differently here, and at least in the US, there's not necessarily a good standard of what to expect in terms of post-termination compensation.

There’s a lot of potential minefields involved in dealing with the separation agreement needed to agree upon severance, though.

Unfortunately I can’t go into much detail here other than say that we reached an equitable agreement, but it did take a considerable amount of time to get to that point.

One of the major general concerns when a worker leaves an American-based startup is the treatment of their stock options. A large part of equity compensation takes the form of ISOs (incentive stock options), which offer favorable tax treatment in the long term.

Unfortunately, vested, unexercised ISOs are capped at 90 days post-employment by law, meaning they disappear in a puff of smoke once you reach that limit. This poses a problem at today's anti-IPO startups that simultaneously reject secondary sales, leaving an employee with few realistic ways to exercise their stock (which for an early employee might cost hundreds of thousands of dollars they don't have, before even counting the corresponding tax hit).
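
To put the scale of that in perspective, here's a rough back-of-the-envelope sketch in Ruby; every number below is hypothetical, since strike prices, share counts, and valuations vary wildly from company to company.

# Hypothetical illustration of what a 90 day exercise window can cost.
# All numbers are made up for the sake of the example.
vested_options = 50_000  # vested, unexercised ISOs
strike_price   = 2.00    # dollars per share, fixed when the options were granted
current_fmv    = 20.00   # current fair market value per share

exercise_cost = vested_options * strike_price
paper_spread  = vested_options * (current_fmv - strike_price)

puts "Cash needed to exercise:        $#{exercise_cost.round}"  # => $100000
puts "Taxable spread (AMT exposure):  $#{paper_spread.round}"   # => $900000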

Another possibility that's quickly gaining steam is to convert those ISOs to NSOs at the 90 day mark and extend the exercise window to something longer, like seven or ten years, instead of a blistering 90 days. In my mind, companies that haven't switched away from the 90 day window are actively stealing from their employees: the employees worked hard to vest their options over a period of years, but the very success they helped create has made those options too expensive to exercise.

I've talked about this at greater length in my aptly-titled post, Fuck Your 90 Day Exercise Window, and I've also started a listing of employee-friendly companies with extended exercise windows. Suffice it to say, this is a pretty important aspect to me and was a big topic in the discussions surrounding my separation agreement.

I had been talking to various people in leadership for a few months hammering out the details, and I had been under the impression that we had reached an agreement, but I was surprised to find out that wasn't the case. I was informed 28 hours before my 90 day window closed that the agreement I thought I had didn't exist. It was then that I realized I had 28 hours to either come up with hundreds of thousands of dollars that I didn't have in order to keep half of my stock, or sign the agreement as-is and avoid losing that half of my already-diminished stake. I opted to sign.

You

Get everything in writing. This also supports my earlier point of aiming to not do anything in the room while you’re getting fired; it allows you to take some time out and think things through once you have the legalese in front of you (and preferably in front of a lawyer).

I think it's fully acceptable to stay on the record. So no phone calls, no meetings in person. Again, you're up against people who have done this frequently in the past, and there's a good chance these thoughts haven't crossed your mind before.

A lot of it certainly might not even be malicious; I’d imagine a lot of people you chat with could be good friends who want to see you leave in good shape, but at the end of the day it’s really dicey to assume the company as a whole is deeply looking out for your interests. The only person looking out for your interests is you.

This also underlines the generally great advice of always knowing a good lawyer, a good accountant, and a good financial advisor. You don’t necessarily have to be currently engaged with a firm; just knowing who to ask for recommendations is a great start. If you can take some time and have some introductory calls with different firms ahead of time, that’s even better. The vast majority of legal and financial firms will be happy to take a quick introductory phone call with you free-of-charge to explain their value proposition. This is highly advantageous for you to do ahead of time so you don’t need to do this when you’re deep in the thick of a potential crisis.

All things considered, though, we did reach an agreement and I was officially free and clear of the company.

Life after

That brings us to the last few months and up to the present. I've spent the last year or so trying to sort out my life and my resulting depression. Shit sucks. Professionally I've done some consulting and private talks here and there, which have been tepidly interesting. I've also served in a formal advisory role to three startups, which I've really come to enjoy; after being so heads-down on a single problem for the last five years, it's nice to get a fair amount of depth in multiple problem spaces, some of which are completely new to me.

But I still haven't found the next thing I'm really interested in, which just feeds into the whole cycle some more. For better or worse, that'll be changing pretty quickly, since I'm pretty broke after working part-time and living in San Francisco for so long. Even though I helped move a company's valuation by almost two billion dollars, I haven't made a dime from the company outside of a below-to-average salary. That's after six years.

Think on that, kids, when you’re busting your ass day and night to strike it rich with your startup dreams.

Coworker

It’s cool to stay in touch. Something that’s kind of cracked me up lately is the sheer logistics behind keeping in touch with my former coworkers. On one hand, you lose out on your normal chat conversations, lunches, and in-person meetings with these colleagues. It’s just a human trait that it’s harder to keep these relationships up when they’re out of sight, out of mind.

Beyond that, though, when you're out of the company you're also out of the rolodex. You might not know someone's phone number or personal email address anymore, for example. A lot of the time you, as a current coworker, are in a somewhat better position to reach out to a former colleague than they are to reach you, since you still have access to that infrastructure. It's possible someone would be up for a chat, but the difficulty in doing so provides a bit of a barrier, so it's fine to reach out and say hi sometimes! Even in the worst corporate breakups that I've heard about, people are usually able to separate bad experiences with the company from bad experiences with you, so you shouldn't be too worried about that if you weren't directly involved.

The one aspect about all of this that you might want to keep in mind that I’ve heard crop up again and again from a number of former employees is around the idea of conversational topics.

In some sense I think it's natural for existing employees to vent about the gossip that's happening at the company to former employees who may have left on bad terms. To take an example from my own experience, I don't think there's anyone else on the planet who knows more dirt on GitHub than I do at this point, even including current employees. I'm certain I gave two to three times as many 1:1s as anyone else at the company in the months following my departure; I think I was a natural point of contact for many who were frustrated with some internal aspect of the company they were dealing with.

And that's fine, to an extent; schadenfreude is a thing, and it can be helpful for a while, for both parties. But man, it gets tiring, particularly when you're not paid for it. Especially when you're still working through your own feelings about it all. It's hard to move on when every day there's something new to trigger it all over again.

So don’t be afraid to be cautious with what you say. If they’re up to hearing new dirt, so be it; if they’re a bit fried about it, chat about your new puppy instead. Everyone loves puppies.

One of the very bright points from all of this is the self-organized GitHub alumni network. Xubbers, we call ourselves. We have a private Facebook group and a private Slack room to talk about things. It’s really about 60% therapy, 20% shooting the shit just like the old days, and 20% networking and supporting each other as we move forward in our new careers apart.

I can't overstate how much I've appreciated this group. In the past I've kept in contact with coworkers from previous points of employment, but I hadn't worked somewhere with enough former employees to necessarily warrant a full alumni group.

Highly recommend pulling a group like this together for your own company. On a long enough timescale, you’re all going to join our ranks anyway. Unless you die first. Then we’ll mount your head on the wall like in a private hunter’s club or something. “The one that almost got away”, we’ll call it.

Xubber meetup

In some sense, I think alumni really continue the culture of the company, independent of what changes may or may not befall the company itself.

One of my favorite stories about all this lately is from Parse. Unfortunately, the circumstances around it aren’t super happy: after being acquired by Facebook, Parse ultimately was killed off last month.

The Parse alumni, though, got together last month to give their beloved company a proper send-off:

No funeral would be complete, though, without a cake. (I’m stretching the metaphor here, but that’s okay, just roll with it.) Parse’s take on the cake involved an upside-down Facebook “like” button, complete with blood:

The most important part of a company is the lasting mark it leaves on the world. That mark is almost always the people. Chances are, your people aren't going to be at your company forever. You want them to move on and do great things. You want them to carry the best parts of your culture with them on to new challenges, new companies, and new approaches.

Once you see that happening, then you can be satisfied with the job you’ve done.

Company

Cultivate the relationship with your alumni. Immediately after parting ways with an employee, there will be a number of important aspects that will require a lot of communication: healthcare, taxes, stock, and so on. So that type of follow-on communication is important to keep in mind.

There are plenty of longer-term relationships to keep in mind as well, though. Things like help with recruiting referrals, potential professional relationships with the former employee’s new company, and other bidirectional ways to help each other in general. It’s good to support those lines of communication.

One way to help this along is to simply provide an obvious point of contact. Having something like an alumni@ email address available is a huge benefit. Otherwise it becomes a smorgasbord of playing guess-the-email-account, which causes problems for your current employees as well. Just set up an alumni@ email alias, forward it to whoever should be fielding those emails, and keep it up-to-date through any changes on your side of the organization.

The last thing to consider is that your alumni are a truly fantastic source of recruiting talent. Most employment terminations are either voluntary (i.e., quitting) or at least on fairly good terms. There are plenty of reasons to leave a job for purposes unrelated to your overall opinion of the company: maybe you’re moving to a different city, or you’re taking a break from work to focus on your kids, or you simply want to try something new. You can be an advocate for your former employer without having to continue your tenure there yourself.

And that’s a good thing. Everyone wants to be the one who helps their friend find a new job. That’s one of the best things you can do for someone. If the company treated them well, they can treat the company well by helping to staff it with good people.

If the company has a poor relationship with former employees, however, one can expect that relationship to go both ways. And nothing is a stronger signal for prospective new hires than to talk to former employees and get their thoughts on the situation.

Next

It’s not your company. If you don’t own the company, the company owns you.

That’s really been a hard lesson for me. I was pretty wrapped up in working there. It’s a broader concept, really, shoved down our throats in the tech industry. Work long hours and move fast. Here, try on this company hoodie. Have this catered lunch so you don’t have to go out into the real world. This is your new home. The industry is replete with this stuff.

One of my friends took an interesting perspective:

I always try to leave on a high note. Because once you’re there, you’re never going to hit that peak again.

What she was getting at, I think, is that you'll know. You'll know the difference between doing far and away your best work, and doing work that is still good, but just nominally better than what you've been doing. Once you catch yourself adjusting to that incremental progression… maybe it's time to leave, to change things up. Just thought that was interesting.

One of my favorite conversations I've had recently was with Ron Johnson. Ron was in charge of rolling out the Apple Store: everything from the Genius Bar to the physical setup to how the staff operated. He eventually left Apple and became the CEO of JCPenney, one of the large stalwart department stores in the United States. Depending on who you ask, he either revolutionized what department stores could be but ran out of time to see the changes bear fruit, or he seriously jeopardized JCPenney's relationship with its customers by putting them through sweeping new changes.

In either case, there had been some discussions internally and he had agreed to resign. A few days later, the board went ahead and very publicly fired him instead.

We chatted about this, and he said something that I really think helped clarify my opinion on everything:

There’s nothing wrong with moving along… regardless of whether it is self-driven or company-driven. Maybe we need new language… right now it’s either we resign or get fired.

Maybe there’s a third concept which is “next”.

Maybe we should simply recognize it’s time for next.

I like that sentiment.

Firing people is a normal function in a healthy, growing company. The company you start at might end up very distinctly different by the time you leave it. Or you might be the one who does the changing. Life’s too nuanced to make these blanket assumptions when we hear about someone getting fired.

Talk about it. If not publicly, then talk openly with your friends and family about things. I don’t know much, but I do know we can’t start fixing and improving this process if we continue to push the discussions to dark alleyways of our minds.

When I finished giving this talk in the UK last week, I was kind of nervous about how many in the audience could really identify with the aspects I was describing. Shortly after the conference wrapped up, we went to the after-party and I was showered with story after story of bad experiences, good experiences, and just overall experiences, from people who hadn't really been able to talk frankly about these topics before. It was pretty humbling. So many people have stories.

Thanks for reading my story.

What’s next?

News stories from Tuesday 01 March, 2016

Favicon for Zach Holman 02:00 How to Deploy Software » Post from Zach Holman Visit off-site link

How to
Deploy Software

Make your team’s deploys as boring as hell and stop stressing about it.

Let's talk deployment

Whenever you make a change to your codebase, there's always going to be a risk that you're about to break something.

No one likes downtime, no one likes cranky users, and no one enjoys angry managers. So the act of deploying new code to production tends to be a pretty stressful process.

It doesn't have to be as stressful, though. There's one phrase I'm going to be reiterating over and over throughout this whole piece:

Your deploys should be as boring, straightforward, and stress-free as possible.

Deploying major new features to production should be as easy as starting a flamewar on Hacker News about spaces versus tabs. They should be easy for new employees to understand, they should be defensive towards errors, and they should be well-tested far before the first end-user ever sees a line of new code.

This is a long — sorry not sorry! — written piece specifically about the high-level aspects of deployment: collaboration, safety, and pace. There's plenty to be said for the low-level aspects as well, but those are harder to generalize across languages and, to be honest, a lot closer to being solved than the high-level process aspects. I love talking about how teams work together, and deployment is one of the most critical parts of working with other people. I think it's worth your time to evaluate how your team is faring, from time to time.

A lot of this piece stems from both my experiences during my five-year tenure at GitHub and during my last year of advising and consulting with a whole slew of tech companies big and small, with an emphasis on improving their deployment workflows (which have ranged from "pretty respectable" to "I think the servers must literally be on fire right now"). In particular, one of the startups I'm advising is Dockbit, whose product is squarely aimed at collaborating on deploys, and much of this piece stems from conversations I've had with their team. There's so many different parts of the puzzle that I thought it'd be helpful to get it written down.

I'm indebted to some friends from different companies who gave this a look-over and helped shed some light on their respective deploy perspectives: Corey Donohoe (Heroku), Jesse Toth (GitHub), Aman Gupta (GitHub), and Paul Betts (Slack). I continually found it amusing how the different companies might have taken different approaches but generally all focused on the same underlying aspects of collaboration, risk, and caution. I think there's something universal here.

Anyway, this is a long intro and for that I'd apologize, but this whole goddamn writeup is going to be long anyway, so deal with it, lol.

Table of Contents

  1. Goals

    Aren't deploys a solved problem?

  2. Prepare

    Start prepping for the deploy by thinking about testing, feature flags, and your general code collaboration approach.

  3. Branch

    Branching your code is really the fundamental part of deploying. You're segregating any possible unintended consequences of the new code you're deploying. Start thinking about different approaches involved with branch deploys, auto deploys on master, and blue/green deploys.

  4. Control

    The meat of deploys. How can you control the code that gets released? Deal with different permissions structures around deployment and merges, develop an audit trail of all your deploys, and keep everything orderly with deploy locks and deploy queues.

  5. Monitor

    Cool, so your code's out in the wild. Now you can fret about the different monitoring aspects of your deploy, gathering metrics to prove your deploy, and ultimately making the decision as to whether or not to roll back your changes.

  6. Conclusion

    "What did we learn, Palmer?"
    "I don't know, sir."
    "I don't fuckin' know either. I guess we learned not to do it again."
    "Yes, sir."

How to Deploy Software was originally published on March 1, 2016.

Goals

Aren't deploys a solved problem?

If you’re talking about the process of taking lines of code and transferring them onto a different server, then yeah, things are pretty solved and are pretty boring. You’ve got Capistrano in Ruby, Fabric in Python, Shipit in Node, all of AWS, and hell, even FTP is going to stick around for probably another few centuries. So tools aren’t really a problem right now.

So if we have pretty good tooling at this point, why do deploys go wrong? Why do people ship bugs at all? Why is there downtime? We’re all perfect programmers with perfect code, dammit.

Obviously things happen that you didn't quite anticipate. And that's where I think deployment is such an interesting area for small- to medium-sized companies to focus on. Very few areas will give you a bigger bang for your buck. Can you build processes into your workflow that anticipate these problems early? Can you use different tooling to help you collaborate on your deploys more easily?

This isn't a tooling problem; this is a process problem.

The vast, vast majority of startups I've talked to the last few years just don't have a good handle on what a "good" deployment workflow looks like from an organizational perspective.

You don't need release managers, you don't need special deploy days, you don't need all hands on deck for every single deploy. You just need to take some smart approaches.

Prepare

Start with a good foundation

You've got to walk before you run. I think there's a trendy aspect of startups out there that all want to get on the coolest new deployment tooling, but when you pop in and look at their process they're spending 80% of their time futzing with the basics. If they were to streamline that first, everything else would fall in place a lot quicker.

Tests

Testing is the easiest place to start. It's not necessarily part of the literal deployment process, but it has a tremendous impact on it.

There's a lot of tricks that depend on your language or your platform or your framework, but as general advice: test your code, and speed those tests up.

My favorite quote about this was written by Ryan Tomayko in GitHub's internal testing docs:

We can make good tests run fast but we can't make fast tests be good.

So start with a good foundation: have good tests. Don't skimp out on this, because it impacts everything else down the line.

Once you start having a quality test suite that you can rely upon, though, it's time to start throwing money at the problem. If you have any sort of revenue or funding behind your team, almost the number one area you should spend money on is whatever you run your tests on. If you use something like Travis CI or CircleCI, run parallel builds if you can and double whatever you're spending today. If you run on dedicated hardware, buy a huge server.

Moving to a faster test suite is one of the most important productivity gains I've seen companies earn, simply because it impacts iteration feedback cycles, time to deploy, developer happiness, and inertia. Throw money at the problem: servers are cheap, developers are not.

I made an informal Twitter poll asking my followers just how fast their test suites ran. Granted, it's hard to account for microservices, language variation, the surprising number of people who didn't have any tests at all, and full-stack vs quicker unit tests, but it still became pretty clear that most people are going to be waiting at least five minutes after a push to see the build status:

How fast should fast really be? GitHub's tests generally ran within 2-3 minutes while I was there. We didn't have a lot of integration tests, which allowed for relatively quick test runs, but in general the faster you can run them the faster you're going to have that feedback loop for your developers.

There are a lot of projects around aimed at helping parallelize your builds. There's parallel_tests and test-queue in Ruby, for example. There's a good chance you'll need to write your tests differently if your tests aren't yet fully independent from each other, but that's really something you should be aiming to do in either case.
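
To make the idea concrete, here's a crude sketch of fanning spec files out across a handful of processes. It's not how parallel_tests or test-queue actually work (they balance by runtime, handle per-process databases, and merge output); it just shows the shape of the approach.

WORKERS = 4
files = Dir.glob("spec/**/*_spec.rb").sort
abort "no spec files found" if files.empty?

# Chunk the files into WORKERS groups and run each group in its own process.
slice_size = (files.size / WORKERS.to_f).ceil
pids = files.each_slice(slice_size).map do |group|
  fork { exec("bundle", "exec", "rspec", *group) }
end

# Fail the build if any worker failed.
statuses = pids.map { |pid| Process.wait2(pid).last }
exit(statuses.all?(&:success?) ? 0 : 1)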

Feature Flags

The other aspect of all this is to start looking at your code and transitioning it to support multiple deployed codepaths at once.

Again, our goal is that your deploys should be as boring, straightforward, and stress-free as possible. The natural stress point of deploying any new code is running into problems you can't foresee, and you ultimately impact user behavior (i.e., they experience downtime and bugs). Bad code is going to end up getting deployed even if you have the best programmers in the universe. Whether that bad code impacts 100% of users or just one user is what's important.

One easy way to handle this is with feature flags. Feature flags have been around since, well, technically since the if statement was invented, but the first time I remember really hearing about a company's usage of feature flags was Flickr's 2009 post, Flipping Out.

These allow us to turn on features that we are actively developing without being affected by the changes other developers are making. It also lets us turn individual features on and off for testing.

Having features in production that only you can see, or only your team can see, or all of your employees can see provides for two things: you can test code in the real world with real data and make sure things work and "feel right", and you can get real benchmarks as to the performance and risk involved if the feature got rolled out to the general population of all your users.

The huge benefit of all of this means that when you're ready to deploy your new feature, all you have to do is flip one line to true and everyone sees the new code paths. It makes that typically-scary new release deploy as boring, straightforward, and stress-free as possible.
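
As a rough sketch of the mechanics (not GitHub's actual implementation), a feature flag check can be as simple as a lookup plus an audience rule; the user.staff? method and the in-memory flag table here are hypothetical stand-ins for whatever your app uses.

# Hypothetical flag table; real apps usually persist this in Redis or a
# database so it can be flipped without a deploy.
FEATURE_FLAGS = {
  new_dashboard: { enabled: false, staff_only: true }
}

def feature_enabled?(flag, user)
  config = FEATURE_FLAGS.fetch(flag, {})
  return true if config[:enabled]                    # flipped on for everyone
  return true if config[:staff_only] && user.staff?  # visible to employees only
  false
end

# Both codepaths live side by side until the flag is flipped:
#   if feature_enabled?(:new_dashboard, current_user)
#     render_new_dashboard
#   else
#     render_old_dashboard
#   end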

Provably-correct deploys

As an additional step, feature flags provide a great way to prove that the code you're about to deploy won't have adverse impacts on performance and reliability. There's been a number of new tools and behaviors in recent years that help you do this.

I wrote a lot about this a couple years back in my companion written piece to my talk, Move Fast and Break Nothing. The gist of it is to run both codepaths of the feature flag in production and only return the results of the legacy code, collect statistics on both codepaths, and be able to generate graphs and statistical data on whether the code you're introducing to production matches the behavior of the code you're replacing. Once you have that data, you can be sure you won't break anything. Deploys become boring, straightforward, and stress-free.

Move Fast Break Nothing screenshot

GitHub open-sourced a Ruby library called Scientist to help abstract a lot of this away. The library's being ported to most popular languages at this point, so it might be worth your time to look into this if you're interested.
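
As a rough sketch of the pattern from Scientist's README: the use block is the legacy codepath whose result users actually get, and the try block is the candidate codepath that's run and measured but never returned. legacy_acl_check and new_permissions_query are hypothetical stand-ins for your own code.

require "scientist"

class PermissionChecker
  include Scientist

  def allowed?(user, repo)
    science "new-permissions-check" do |experiment|
      experiment.use { legacy_acl_check(user, repo) }      # control: returned to callers
      experiment.try { new_permissions_query(user, repo) } # candidate: only measured
    end
  end
end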

The other leg of all of this is percentage rollout. Once you're pretty confident that the code you're deploying is accurate, it's still prudent to only roll it out to a small percentage of users first to double-check and triple-check nothing unforeseen is going to break. It's better to break things for 5% of users instead of 100%.

There are plenty of libraries that aim to help out with this: Rollout in Ruby, Togglz in Java, fflip in JavaScript, and many others. There are also startups tackling this problem, like LaunchDarkly.
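
Under the hood, most of these libraries boil down to deterministic bucketing: hash the user into one of 100 buckets so the same user always gets the same answer, then compare against the rollout percentage. A minimal sketch, with a hypothetical in-memory flag table:

require "zlib"

ROLLOUT_PERCENTAGES = { "new-checkout" => 5 } # hypothetical: 5% of users

def rolled_out?(feature, user_id)
  percentage = ROLLOUT_PERCENTAGES.fetch(feature, 0)
  bucket = Zlib.crc32("#{feature}:#{user_id}") % 100 # stable bucket per user per feature
  bucket < percentage
end

# rolled_out?("new-checkout", 42) #=> true for ~5% of user ids, always the same ones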

It's also worth noting that this doesn't have to be a web-only thing. Native apps can benefit from this exact behavior too. Take a peek at GroundControl for a library that handles this behavior in iOS.


Feeling good with how you're building your code out? Great. Now that we got that out of the way, we can start talking about deploys.

Branch

Organize with branches

A lot of the organizational problems surrounding deployment stem from a lack of communication between the person deploying new code and the rest of the people who work on the app with her. You want everyone to know the full scope of changes you're pushing, and you want to avoid stepping on anyone else's toes while you do it.

There's a few interesting behaviors that can be used to help with this, and they all depend on the simplest unit of deployment: the branch.

Code branches

By "branch", I mean a branch in Git, or Mercurial, or whatever you happen to be using for version control. Cut a branch early, work on it, and push it up to your preferred code host (GitLab, Bitbucket, etc).

You should also be using pull requests, merge requests, or other code review to keep track of discussion on the code you're introducing. Deployments need to be collaborative, and using code review is a big part of that. We'll touch on pull requests in a bit more detail later in this piece.

Code Review

The topic of code review is long, complicated, and pretty specific to your organization and your risk profile. I think there's a couple important areas common to all organizations to consider, though:

  • Your branch is your responsibility. The companies I've seen who have tended to be more successful have all had this idea that the ultimate responsibility of the code that gets deployed falls upon the person or people who wrote that code. They don't throw code over the wall to some special person with deploy powers or testing powers and then get up and go to lunch. Those people certainly should be involved in the process of code review, but the most important part of all of this is that you are responsible for your code. If it breaks, you fix it… not your poor ops team. So don't break it.

  • Start reviews early and often. You don't need to finish a branch before you can request comments on it. If you can open a code review with imaginary code to gauge interest in the interface, for example, those twenty minutes spent doing that and getting told "no, let's not do this" are far preferable to blowing two weeks on the full implementation instead.

  • Someone needs to review. How you do this can depend on the organization, but certainly getting another pair of eyes on code can be really helpful. For more structured companies, you might want to explicitly assign people to the review and demand they review it before it goes out. For less structured companies, you could mention different teams to see who's most readily available to help you out. At either end of the spectrum, you're setting expectations that someone needs to lend you a hand before storming off and deploying code solo.

Branch and deploy pacing

There's an old joke that's been passed around from time to time about code review. Whenever you open a code review on a branch with six lines of code, you're more likely to get a lot of teammates dropping in and picking apart those six lines left and right. But when you push a branch that you've been working on for weeks, you'll usually just get people commenting with a quick 👍🏼 looks good to me!

Basically, developers are usually just a bunch of goddamn lazy trolls.

You can use that to your advantage, though: build software using quick, tiny branches and pull requests. Make them small enough to where it's easy for someone to drop in and review your pull in a couple minutes or less. If you build massive branches, it will take a massive amount of time for someone else to review your work, and that leads to a general slow-down with the pace of development.

Confused at how to make everything so small? This is where those feature flags from earlier come into play. When my team of three rebuilt GitHub Issues in 2014, we had shipped probably hundreds of tiny pull requests to production behind a feature flag that only we could see. We deployed a lot of partially-built components before they were "perfect". It made it a lot easier to review code as it was going out, and it made it quicker to build and see the new product in a real-world environment.

You want to deploy quickly and often. A team of ten could probably deploy at least 7-15 branches a day without too much fretting. Again, the smaller the diff, the more boring, straightforward, and stress-free your deploys become.

Branch deploys

When you're ready to deploy your new code, you should always deploy your branch before merging. Always.

View your entire repository as a record of fact. Whatever you have on your master branch (or whatever you've changed your default branch to be) should be noted as being the absolute reflection of what is on production. In other words, you can always be sure that your master branch is "good" and is a known state where the software isn't breaking.

Branches are the question. If you merge your branch into master first and then deploy master, you no longer have an easy way of determining what your good, known state is without doing an icky rollback in version control. It's not necessarily rocket science to do, but if you deploy something that breaks the site, the last thing you want to do is have to think about anything. You just want an easy out.

This is why it's important that your deploy tooling allows you to deploy your branch to production first. Once you're sure that your performance hasn't suffered, there's no stability issues, and your feature is working as intended, then you can merge it. The whole point of having this process is not for when things work, it's when things don't work. And when things don't work, the solution is boring, straightforward, and stress-free: you redeploy master. That's it. You're back to your known "good" state.

Auto-deploys

Part of all that is to have a stronger idea of what your "known state" is. The easiest way of doing that is to have a simple rule that's never broken:

Unless you're testing a branch, whatever is deployed to production is always reflected by the master branch.

The easiest way I've seen to handle this is to just always auto-deploy the master branch if it's changed. It's a pretty simple ruleset to remember, and it encourages people to make branches for all but the most risk-free commits.

There's a number of features in tooling that will help you do this. If you're on a platform like Heroku, they might have an option that lets you automatically deploy new versions on specific branches. CI providers like Travis CI also will allow auto deploys on build success. And self-hosted tools like Heaven and hubot-deploy — tools we'll talk about in greater detail in the next section — will help you manage this as well.

Auto-deploys are also helpful when you do merge the branch you're working on into master. Your tooling should pick up a new revision and deploy the site again. Even though the content of the software isn't changing (you're effectively redeploying the same codebase), the SHA-1 does change, which makes it more explicit as to what the current known state of production is (which again, just reaffirms that the master branch is the known state).

Blue-green deploys

Martin Fowler has pushed this idea of blue-green deployment since his 2010 article (which is definitely worth a read). In it, Fowler talks about the concept of using two identical production environments, which he calls "blue" and "green". Blue might be the "live" production environment, and green might be the idle production environment. You can then deploy to green, verify that everything is working as intended, and make a seamless cutover from blue to green. Production gains the new code without a lot of risk.

One of the challenges with automating deployment is the cut-over itself, taking software from the final stage of testing to live production.

This is a pretty powerful idea, and it's become even more powerful with the growing popularity of virtualization, containers, and generally having environments that can be easily thrown away and forgotten. Instead of having a simple blue/green deployment, you can spin up production environments for basically everything in the visual light spectrum.

There's a multitude of reasons behind doing this, from having disaster recovery available to having additional time to test critical features before users see them, but my favorite is the additional ability to play with new code.

Playing with new code ends up being pretty important in the product development cycle. Certainly a lot of problems should be caught earlier in code review or through automated testing, but if you're trying to do real product work, it's sometimes hard to predict how something will feel until you've tried it out for an extended period of time with real data. This is why blue-green deploys in production are more important than having a simple staging server whose data might be stale or completely fabricated.

What's more, if you have a specific environment that you've spun up with your code deployed to it, you can start bringing different stakeholders on board earlier in the process. Not everyone has the technical chops to pull your code down on their machine and spin it up locally — nor should they! If you can show your new live screen to someone in the billing department, for example, they can give you some realistic feedback on it prior to it going out live to the whole company. That can catch a ton of bugs and problems early on.

Heroku Pipelines

Whether or not you use Heroku, take a look at how they've been building out their concept of "Review Apps" in their ecosystem: apps get deployed straight from a pull request and can be immediately played with in the real world instead of just being viewed through screenshots or long-winded "this is what it might work like in the future" paragraphs. Get more people involved early before you have a chance to inconvenience them with bad product later on.

Control

Controlling the deployment process

Look, I'm totally the hippie liberal yuppie when it comes to organizational matters in a startup: I believe strongly in developer autonomy, a bottom-up approach to product development, and generally will side with the employee rather than management. I think it makes for happier employees and better product. But with deployment, well, it's a pretty important, all-or-nothing process to get right. So I think adding some control around the deployment process makes a lot of sense.

Luckily, deployment tooling is an area where adding restrictions ends up freeing teammates up from stress, so if you do it right it's going to be a huge, huge benefit instead of what people might traditionally think of as a blocker. In other words, your process should facilitate work getting done, not get in the way of work.

Audit trails

I'm kind of surprised at how many startups I've seen unable to quickly bring up an audit log of deployments. There might be some sort of paper trail in a chat room transcript somewhere, but it's not something that is readily accessible when you need it.

The benefit of some type of audit trail for your deployments is basically what you'd expect: you'd be able to find out who deployed what to where and when. Every now and then you'll run into problems that don't manifest themselves until hours, days, or weeks after deployment, and being able to jump back and tie it to a specific code change can save you a lot of time.

A lot of services will generate these types of deployment listings fairly trivially for you. Amazon CodeDeploy and Dockbit, for example, have a lot of tooling around deploys in general but also serve as a nice trail of what happened when. GitHub's excellent Deployment API is a nice way to integrate with your external systems while still plugging deploy status directly into Pull Requests:

GitHub's deployment API
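
As a sketch of what recording those deploys can look like, assuming the octokit gem's deployment helpers (the repo, branch, and token names here are made up):

require "octokit"

client = Octokit::Client.new(access_token: ENV.fetch("GITHUB_TOKEN"))

# Record that a deploy of this branch is happening.
client.create_deployment("my-org/my-app", "new-permissions",
                         environment: "production",
                         description: "deployed via chat")

# Later, the audit trail is just a listing you can query.
client.deployments("my-org/my-app").first(5).each do |d|
  puts "#{d.created_at}  #{d.environment}  #{d.ref}  by #{d.creator.login}"
end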

If you're playing on expert mode, plug your deployments and deployment times into one of the many, many time series databases and services like InfluxDB, Grafana, Librato, or Graphite. The ability to compare any given metric and layer deployment metrics on top of it is incredibly powerful: seeing a gradual increase of exceptions starting two hours ago might be curious at first, but not if you see an obvious deploy happen right at that time, too.

Deploy locking

Once you reach the point of having more than one person in a codebase, you're naturally going to have problems if multiple people try to deploy different code at once. While it's certainly possible to have multiple branches deployed to production at once — and it's advisable, as you grow past a certain point — you do need to have the tooling set up to deal with those deploys. Deploy locking is the first thing to take a look at.

Deploy locking is basically what you'd expect it to be: locking production so that only one person can deploy code at a time. There's many ways to do this, but the important part is that you make this visible.

The simplest way to achieve this visibility is through chat. A common pattern might be to set up deploy commands that simultaneously lock production like:

/deploy <app>/<branch> to <environment>

i.e.,

/deploy api/new-permissions to production

This makes it clear to everyone else in chat that you're deploying. I've seen a few companies hop in Slack and mention everyone in the Slack deploy room with @here I'm deploying […]!. I think that's unnecessary, and only serves to distract your coworkers. By just tossing it in the room you'll be visible enough. If it's been a while since a deploy and it's not immediately obvious whether production is being used, you can add an additional chat command that returns the current state of production.

There's a number of pretty easy ways to plug this type of workflow into your chat. Dockbit has a Slack integration that adds deploy support to different rooms. There's also an open source option called SlashDeploy that integrates GitHub Deployments with Slack and gives you this workflow as well (as well as handling other aspects like locking).

Another possibility that I've seen is to build web tooling around all of this. Slack has a custom internal app that provides a visual interface to deployment. Pinterest has open sourced their web-based deployment system. You can take the idea of locking to many different forms; it just depends on what's most impactful for your team.

Once a deploy's branch has been merged to master, production should automatically unlock for the next person to use.

There's a certain amount of decorum required while locking production. Certainly you don't want people stuck waiting to deploy because a careless programmer forgot they left production locked. Automatically unlocking on a merge to master is helpful, and you can also set up periodic reminders to mention the deployer if the environment has been locked for longer than 10 minutes, for instance. The idea is to shit and get off the pot as soon as possible.
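
A minimal sketch of the locking itself, using a single Redis key with an expiry so a forgotten lock eventually clears on its own (the chat command would call something like this under the hood):

require "redis"

REDIS = Redis.new
LOCK_KEY = "deploy-lock:production"

def lock_production(deployer)
  # nx: only set if nobody holds the lock; ex: auto-expire after 10 minutes
  REDIS.set(LOCK_KEY, deployer, nx: true, ex: 600)
end

def production_locked_by
  REDIS.get(LOCK_KEY) # nil if unlocked
end

def unlock_production
  REDIS.del(LOCK_KEY)
end

# lock_production("holman") #=> true if acquired, false if someone else has it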

Deploy queueing

Once you have a lot of deployment locks happening and you have a lot of people on board deploying, you're obviously going to have some deploy contention. For that, draw from your deepest resolve of Britishness inside of you, and form a queue.

A deploy queue has a couple parts: 1) if there's a wait, add your name to the end of the list, and 2) allow for people to cut the line (sometimes Really Important Deploys Need To Happen Right This Minute and you need to allow for that).
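
A toy in-memory version of those two parts looks something like this; in practice you'd back it with Redis or your chat bot so everyone sees the same queue:

class DeployQueue
  def initialize
    @queue = []
  end

  def join(name)
    @queue << name unless @queue.include?(name)
    @queue.index(name) # your position in line
  end

  def cut_in_line(name)
    # Really Important Deploys get to skip to the front.
    @queue.delete(name)
    @queue.unshift(name)
  end

  def next!
    @queue.shift # whoever deploys next
  end
end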

The only problem with deploy queueing is having too many people queued to deploy. GitHub's been facing this internally the last year or so; come Monday when everybody wants to deploy their changes, the list of those looking to deploy can be an hour or more long. I'm not particularly a microservices advocate, but I think deploy queues specifically see a nice benefit if you're able to split things off from a majestic monolith.

Permissions

There's a number of methods to help restrict who can deploy and how someone can deploy.

2FA is one option. Hopefully your employee's chat account won't get popped, and hopefully they have other security measures enabled on their machine (full disk encryption, strong passwords, etc.). But for a little more peace of mind you can require a 2FA process to deploy.

You might get 2FA from your chat provider already. Campfire and Slack, for example, both support 2FA. If you want it to happen every time you deploy, however, you can build a challenge/response step into the deploy process. Heroku and Basecamp both have a process like that internally, for instance.

Another possibility to handle the who side of permissions is to investigate what I tend to call "riding shotgun". I've seen a number of companies who have either informal or formal processes or tooling for ensuring that at least one senior developer is involved in every deploy. There's no reason you can't build out a 2FA-style process like that into a chat client, for example, requiring both the deployer and the senior developer riding shotgun to confirm that code can go out.

Monitor

Admire and check your work

Once you've got your code deployed, it's time to verify that what you just shipped actually does what you intended it to do.

Check the playbook

All deploys should really hit the exact same game plan each time, no matter if it's a frontend change or a backend change or anything else. You're going to want to check whether the site is still up, whether performance took a sudden turn for the worse, whether error rates are climbing, or whether there's an influx of new support issues. It's to your advantage to streamline that game plan.

If you have multiple sources of information for all of these aspects, try putting a link to each of these dashboards in your final deploy confirmation in chat, for example. That'll remind everyone every time to look and verify they're not impacting any metrics negatively.

Ideally, this should all be drawn from one source. Then it's easier to direct a new employee, for example, towards the important metrics to look at while making their first deploy. Pinterest's Teletraan, for example, has all of this in one interface.

Metrics

There's a number of metrics you can collect and compare that will help you determine whether you just made a successful deploy.

The most obvious, of course, is the general error rate. Has it dramatically shot up? If so, you probably should redeploy master and go ahead and fix those problems. You can automate a lot of this, and even automate the redeploy if the error rate crosses a certain threshold. Again, if you assume the master branch is always a known state you can roll back to, it makes it much easier to automate auto-rollbacks if you trigger a slew of exceptions right after deploy.
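
A sketch of that automation, where error_rate_last_minutes, notify_chat, and deploy are hypothetical hooks into your metrics store, chat, and deploy tooling (stubbed here so the example runs):

ERROR_RATE_THRESHOLD = 0.05 # roll back if more than 5% of requests error

# Placeholder hooks; wire these up to your real metrics/chat/deploy systems.
def error_rate_last_minutes(minutes)
  0.01
end

def notify_chat(message)
  puts message
end

def deploy(branch)
  puts "deploying #{branch}"
end

def verify_deploy!(branch)
  sleep 120 # let the new code take real traffic for a couple of minutes
  rate = error_rate_last_minutes(2)

  if rate > ERROR_RATE_THRESHOLD
    notify_chat("Error rate at #{(rate * 100).round(1)}% after #{branch}; redeploying master.")
    deploy("master") # master is the known-good state
  else
    notify_chat("#{branch} looks healthy (error rate #{(rate * 100).round(1)}%).")
  end
end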

The deployments themselves are interesting metrics to keep on hand as well. Zooming out over the last year or so can help give you a good sense of whether you're scaling the development pace up, or whether it's clear that there are some problems and things are slowing down. You can also take it a step further and collect metrics on who's doing the deploying and, though I haven't heard of anyone doing this explicitly yet, tie error rates back to the deployer and develop a good measurement of who the reliable deployers on the team are.

Post-deploy cleanup

The final bit of housework that's required is the cleanup.

The slightly aggressively-titled piece "Feature Toggles are one of the worst kinds of Technical Debt" talks a bit about this. If you're building things with feature flags and staff deployments, you run the risk of complicating the long-term sustainability of your codebase:

The plumbing and scaffolding logic to support branching in code becomes a nasty form of technical debt, from the moment each feature switch is introduced. Feature flags make the code more fragile and brittle, harder to test, harder to understand and maintain, harder to support, and less secure.

You don't need to do this right after a deploy; if you have a bigger feature or bugfix that needs to go out, you'll want to spend your time monitoring metrics instead of immediately deleting code. You should do it at some point after the deploy, though. If you have a large release, you can make it part of your shipping checklist to come back and remove code maybe a day or a week after it's gone out. One approach I liked to take was to prepare two pull requests: one that toggles the feature flag (i.e., ships the feature to everyone), and one that cleans up and removes all the excess code you introduced. When I'm sure that I haven't broken anything and it looks good, I can just merge the cleanup pull request later without a lot of thinking or development.

You should celebrate this internally, too: it's the final sign that your coworker has successfully finished what they were working on. And everyone likes it when a diff is almost entirely red. Removing code is fun.

Deleted branch

You can also delete the branch when you're done with it. There's nothing wrong with deleting branches when you're done with them. If you're using GitHub's pull requests, for example, you can always restore a deleted branch, so you'll benefit from having it cleared out of your branch list without actually losing any data. This step can be automated, too: periodically run a script that looks for stale branches that have been merged into master, and then delete those branches.
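
A small housekeeping script along those lines might look like this; it shells out to git and deletes remote branches that are already merged into master (run it from a cron job, a CI schedule, or a chat command):

PROTECTED = %w[master HEAD]

merged = `git branch -r --merged origin/master`.lines.map(&:strip)

merged.each do |branch|
  next unless branch.start_with?("origin/")
  next if branch.include?("->") # skip the origin/HEAD -> origin/master line

  name = branch.sub("origin/", "")
  next if PROTECTED.include?(name)

  system("git", "push", "origin", "--delete", name)
end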

Neato

The whole ballgame

I only get emotional about two things: a moving photo of a Golden Retriever leaning with her best friend on top of a hill overlooking an ocean looking towards a beautiful sunset, and deployment workflows. The reason I care so much about this stuff is because I really do think it's a critical part of the whole ballgame. At the end of the day, I care about two things: how my coworkers are feeling, and how good the product I'm working on is. Everything else stems from those two aspects for me.

Deployments can cause stress and frustration, particularly if your company's pace of development is sluggish. They can also slow you down and prevent you from getting features and fixes out to your users.

I think it's worthwhile to think about this, and worthwhile to improve your own workflows. Spend some time and get your deploys to be as boring, straightforward, and stress-free as possible. It'll pay off.

Written by Zach Holman. Thanks for reading.

If you liked this, you might like some of the other things I've written. If you didn't like this, well, they're not all winners.

I also do some consulting about all of this stuff as well if your company's looking for help.

Did reading this leave you with questions, or do you have anything you'd like to talk about? Feel free to drop by my ask-me-anything repository on GitHub and file a new issue so we can chat about it in the open with other people in the community.

I hope we eventually domesticate sea otters.

News stories from Tuesday 23 February, 2016

Favicon for Kopozky 15:53 Problems » Post from Kopozky Visit off-site link

Comic strip: “Problems”


News stories from Tuesday 16 February, 2016

Favicon for Doctrine Project 02:00 Doctrine MongoDB ODM release 1.0.5 » Post from Doctrine Project Visit off-site link

Doctrine MongoDB ODM release 1.0.5

We are happy to announce the immediate availability of Doctrine MongoDB ODM 1.0.5.

Bug fixes in this release

Notable fixes may be found in the changelog. A full list of issues and pull requests included in this release may be found in the 1.0.5 milestone.

Installation

You can install the latest version using the following composer.json definitions:

{
    "require": {
        "doctrine/mongodb-odm": "^1.0.5"
    }
}

Doctrine MongoDB ODM and PHP 7

While the ODM still relies on the legacy MongoDB driver (i.e. ext-mongo) and no dates are scheduled for the 2.0 release, it is possible to run the ODM's development branch with the new MongoDB driver (i.e. ext-mongodb) on PHP 7 and HHVM! (see: this tweet) The new driver should be properly supported once we release versions 1.1 and 1.3 of the ODM and the underlying Doctrine MongoDB library, respectively. This is all possible thanks to Andreas Braun's (@alcaeus) work on mongo-php-adapter, which implements ext-mongo's API atop ext-mongodb and mongodb-php-library. If you can't wait to give the ODM a test flight on PHP 7, now is the time! Also, if you happen to meet Andreas, be sure to buy him a beer :)

News stories from Tuesday 02 February, 2016

Favicon for Doctrine Project 02:00 DoctrineModule 1.0.0 we have a stable release » Post from Doctrine Project Visit off-site link

DoctrineModule 1.0.0 we have a stable release

We are happy to announce the first stable release for DoctrineModule! 1.0.0 is ready to go after a couple of years of work.

The "Initial Commit" dates back to Oct 22, 2011; after 4 years, we are ready.

Thanks to all for your contributions!

Update your composer configuration to use the stable version of this project.

Changes since 0.10.0

This is a list of issues resolved in 1.0.0 since 0.10.0:

  • [#523] Remove deprecated api call from test
  • [#547] Allow for the use of Zend\Cache\Service\StorageCacheAbstractServiceFactory

Please report any issues you may have with the update on the mailing list or on GitHub.

Remember to read our documentation and improve it with your knowledge.

News stories from Thursday 28 January, 2016

Favicon for Zach Holman 02:00 Startup Interviewing is Fucked » Post from Zach Holman Visit off-site link

Silicon Valley is full of startups who fetishize the candidate who comes into the interview, answers a few clever fantasy coding challenges, and ultimately ends up being the award-winning hire who will surely implement the elusive algorithm that heralds a new era of profitability for the fledgling VC-backed company.

Most startups have zero users and are a glimmer of the successful business they might wind up being some day. But we’re still romanticizing the idea that programming riddles will magically be the best benchmark for hiring, even though technology is very rarely the cause for any given startup’s success.

Know what you need

There’s such a wild gulf between what gets asked in interviews and what gets done in the gig’s daily grind that it’s a wonder how startups make it out of the initial incubation phase in the first place.

I'm a product engineer. I don't have a formal CS background, but I build things for the web, and I'm really good at it. Not once in the last ten months of on-and-off interviewing have I ever seen anything remotely close to a view or a controller or even a model. Not every company has insisted upon using programming riddles as a hiring technique, but the ones that do almost exclusively focus on weird algorithmic approaches to problems that don't exist in the real world.

Interviewer: How would you write a method to do this operation?

Me: writes a one-liner in Ruby

Interviewer: Okay now what if you couldn’t use the standard library? Imagine it’s a 200GB file and you have to do it all in memory in Ruby.

Me: Why the fuck would I do that?

Certainly there are some jobs where being extremely performant and algorithmically "correct" are legitimate things to interview against. But look around: how many small, less-than-50-person startups are doing work like that? The dirty secret is that most startups for the first few years are glorified CRUD apps, and the well-rounded, diverse people who can have the biggest impact tend to be the ones who are comfortable wearing a lot of hats.

My favorite few tweets from this week talked about this:

Worry more about whether you’re self-selecting the wrong people into your organization.

Power dynamics

A huge problem with all this is that it creates a power dynamic that virtually all but assures that people who are bad at technical interviews will fail.

Algorithm-based challenges typically come from a place where the interviewer, in all their self-aggrandizing smugness, comes up with something they think demonstrates cleverness. A reliable bet is to try solving it with recursion from the start; it's goddamn catnip for interviewers. If that doesn't work, try doing it all in one pass rather than in multiple O(n) passes, because the extra 1ms you save in this use case will surely demonstrate your worth to the organization.

When you come at it from this perspective, you’re immediately telling your prospective coworker that “I have a secret that only I know right now, and I want you to arrive at this correct answer.” It becomes stressful because there is a correct answer.

Every single product I’ve built in my professional career has not had a correct answer. It’s more akin to carving a statue out of marble: you have a vague understanding of what you want to see, but you have to continually chip away at it and refine it until you end up with one possible result. You arrive at the answer, together, with your teammates. You don’t sit on a preconceived answer and direct your coworker to slug through it alone.

Collaborate

This is why I so strongly advocate for pair programming at some point in the interview process. Take an hour and knock off whatever bug or feature you were going to work on together. Not happening to be doing anything interesting today? The bug is too “boring”? Cool, then why are you working on it? If it’s representative of the real work that the candidate will face in the job, then it’s good enough to interview on. Besides, you can learn a lot from someone even in the simplest of fixes.

Build something real together. The very act of doing that entirely changes the power dynamic; I cannot stress that enough. Whereas previously you had someone struggling to find out a secret only you were initially privy to, you’re now working together on a problem neither of you have a firm answer to yet. Before it was adversarial; now it’s collaborative. It’ll put your candidate at ease, and they’ll be able to demonstrate their skillset to you much easier.

No one has any idea what they’re doing

I’ve heard — and experienced — so many things happening in tech interviews that are just bonkers.

You have stories from people like Max Howell who get rejected from jobs ostensibly because he’s not a good enough developer to whiteboard out algorithms, even though he built one of the most popular tools for software developers today.

I interviewed for a director of engineering role last year for a startup with famously massive growth that had fundamental problems with their hundreds of developers not being able to get any product shipped. I had a good discussion with their CEO and CTO about overhauling their entire process, CI, deployment, and management structure, and then when I went in for the final round of interviews for this non-programming leadership role the interviews were done almost entirely by junior developers who asked me beginner JavaScript questions. It just boggles my mind.


Look, I get it. It takes time and effort to interview someone, and most of you just want to get back to building stuff. Coming up with a standard question lets you get away with doing more with less effort, and gives you a modicum of an ability for comparison across different candidates.

But really take a long look at whether this selects the right candidates. The skill set needed for most early startups — particularly of early employees — is a glorious, twisted mess of product, code, marketing, design, communication, and empathy. Don’t filter out those people by doing what a Microsoft or an Apple does. They’re big companies, and let me be the first to tell you: that ain’t you right now. You have different priorities.

It’s more work, but it makes for better companies and better hires, in my opinion. But what do I know; I failed those fucking tests anyway.

News stories from Saturday 16 January, 2016

Favicon for Kopozky 20:18 The Precipice » Post from Kopozky Visit off-site link

Comic strip: “The Precipice”

Starring: Mr Kopozky and The Developer


News stories from Friday 08 January, 2016

Favicon for Zach Holman 02:00 Fuck Your 90 Day Exercise Window » Post from Zach Holman Visit off-site link

There are a lot of problems with the compensation we give early employees at startups. I don’t know how to fix all of them, but one obvious area to start directing our anger towards is something we can fix relatively quickly: the customary 90 day exercise window.

90 days and poof

Most startups give you a 90 day window to exercise your vested options once you leave the company — either through quitting or through termination — or all of your unexercised options vanish.

This creates a perverse incentive for employees not to grow the company too much.

For example: say you’re employee number one at A Very Cool Startup, and, through your cunning intellect and a lot of luck and a lot of help from your friends, you manage to help grow the company to the pixie fairy magic dragon unicorn stage: a billion dollar valuation. Cool! You’re totes gonna be mad rich.

I climbed the bridge lol

Ultimately, you end up leaving the company. Maybe the company’s outgrown you, or you’re bored after four years, or your spouse got a new job across the country, or you’ve been fired, or maybe you die, or hey, none of your business I just want out dammit. The company’s not public, though, so everything becomes trickier. With a 90 day exercise window, you now have three months to raise the money to pay to exercise your options and the additional tax burdens associated with exercising, otherwise you get nothing. In our imaginary scenario, that could be tens or hundreds of thousands of dollars. And remember: you’re a startup worker, so there’s a good chance you’ve been living off a smaller salary all along!

So you’re probably stuck. Either you fork out enough dough yourself on a monumentally risky investment, sell them on the secondary market (which most companies disallow post-Facebook IPO), give up a portion of equity in some shady half-sale-loan thing to various third parties, or forfeit the options entirely.

I mean, you did what you were supposed to: you helped grow that fucking company. And now, in part because of your success, it’s too expensive to own what you had worked hard to vest? Ridiculous.

Solutions

How we got here wasn’t necessarily malicious. These 90 day exercise windows can likely be tied back to ISOs terminating, by law, at 90 days. NSOs came along for the ride. This was less problematic when we had a somewhat more liquid marketplace for employee equity. With IPOs taking much longer to happen combined with companies restricting sale on the secondary market, these 90 days have completely stifled the tech worker’s ability to even hold the equity they’ve earned, much less profit from it.

There’s a relatively easy solution: convert vested ISOs to nonquals and extend the exercise window from 90 days to something longer. Pinterest is moving to seven years (in part by converting ISOs to nonquals). Sam Altman suggests ten years. In either case, those are both likely long enough timespans for other options to arise for you: the company could go public (in which case you can sell shares on the open market to handle the tax hit), the company could fail (in which case you’re not stuck getting fucked over paying hundreds of thousands of dollars for worthless stock, which can even happen in a “successful” acquisition), you could become independently wealthy some other way, or the company could get acquired and you gain even more outs.

Naturally, modifying the stock agreement is a solution that only companies can take. So what can you, the humble worker bee, do?

The new norm

We need to encourage companies to start taking steps towards correcting the problems we see today. I want to see more employees able to retain the compensation they earned. I want to see this become the norm.

My friend’s trying to adopt some employee-friendly terms in the incorporation of his third startup, and he mentioned this to me specifically:

You have no idea how hard it’s been to try something different. Even tried to get a three year vest for my employees, because I think four years is a bullshit norm, and lawyers mocked me for 15 minutes. Said it would make my company uninvestable.

The more companies we can get shifting to these employee-friendly terms, bit by bit, the easier it is for everyone else to accept these as the norm. Start the conversation with prospective employers. Write and tweet about your own experiences. Ask your leadership if they’ll switch over.

Clap for ‘em

One final, important part is to applaud the companies doing it right, and to promote them amongst the startup community.

I just created a repository at holman/extended-exercise-windows that lists out companies who have extended their exercise windows. If you’re interested in working for a company that takes a progressive, employee-friendly stance on this, give it a look. If you’re a company who’s switched to a longer exercise window, please contribute! And if you’re at a company that currently only does 90 day exercise windows, give them a friendly heads-up, and hopefully we can add them soon enough.

You have 90 days to do this, and then I’m deleting the repo.

Just kidding.

News stories from Tuesday 05 January, 2016

Favicon for Doctrine Project 02:00 Doctrine DBAL 2.5.4 and 2.4.5 Released » Post from Doctrine Project Visit off-site link

Doctrine DBAL 2.5.4 and 2.4.5 Released

We are happy to announce the immediate availability of Doctrine DBAL 2.5.4 and 2.4.5.

DBAL 2.5.4

SQLite types weren’t correctly identified when whitespace was present in the table definitions: the DBAL now correctly recognizes that, and ignores the whitespace. #2272

The constant PDO::PGSQL_ATTR_DISABLE_PREPARES is only defined when PGSQL support for PHP is enabled with PDO. The DBAL now checks whether the constant is available before relying on it. #2249
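
The following is a minimal sketch of that kind of guard, not the DBAL’s actual code; the connection parameters are made-up placeholders:

<?php
// Only touch the constant when the pdo_pgsql extension actually defines it
// (illustrative sketch; host, database and credentials are placeholders).
$options = [];

if (defined('PDO::PGSQL_ATTR_DISABLE_PREPARES')) {
    $options[PDO::PGSQL_ATTR_DISABLE_PREPARES] = true;
}

$pdo = new PDO('pgsql:host=localhost;dbname=app', 'app_user', 'secret', $options);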

DBAL 2.4.5

This release backports a number of fixes that were already present in DBAL 2.5.3.

Specifically, the following issues were solved:

  • SQLite offset with no limit support #1069
  • Fix removing autoincrement column from a primary key #1074
  • Infinite recursion on non-unique table/join alias in QueryBuilder #1079
  • Fix for bad profiling data, showing an indefinitely long query #1124
  • Fix incorrect ordering of columns in clustered indexes on sql server #1129
  • Avoid fatal error in array_merge while generating the table creation SQL #1141
  • template1 as default database for PostgreSQL #1162

Please be advised that this is the last 2.4.x release, and except for security releases, no further patches will be provided for DBAL 2.4: please upgrade to 2.5 as soon as possible.

Installation

You can install the DBAL component using Composer:

composer require doctrine/dbal:~2.5.4

Please report any issues you may have with the update on the issue tracker.

Favicon for Doctrine Project 02:00 Doctrine ORM 2.5.4 Released » Post from Doctrine Project Visit off-site link

Doctrine ORM 2.5.4 Released

We are happy to announce the immediate availability of Doctrine ORM 2.5.4.

This release fixes an issue with how identifiers are used when building second level cache entries during hydration. #1568

Installation

You can install the ORM component using Composer:

composer require doctrine/orm:~2.5.4

Please report any issues you may have with the update on the issue tracker.

News stories from Thursday 31 December, 2015

Favicon for Doctrine Project 02:00 Cache 1.6.0 Released » Post from Doctrine Project Visit off-site link

Cache 1.6.0 Released

We are happy to announce the immediate availability of Doctrine Cache 1.6.0.

Cache 1.6.0

Support for PHP versions below 5.5.0 was removed: please remember that if you are still using PHP 5.4.x or lower, the PHP project does not provide support for those versions anymore. #109

Native APCu support was introduced: if you run newer versions of APCu, then you can use the new ApcuCache adapter. #115

A MultiPutCache interface was introduced: the CacheProvider implements it by default now. This interface can lead to improved performance when saving multiple keys at once, if your cache adapter supports such an operation. #117

The ArrayCache now honors the given cache entries’ TTL, making it possible to use it even in long-running processes without the risk of dealing with stale data. #130
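
As a rough usage sketch of the new MultiPutCache support, using the ArrayCache adapter that ships with the component (the keys, values and 300-second TTL are invented for illustration):

<?php
use Doctrine\Common\Cache\ArrayCache;

$cache = new ArrayCache();

// saveMultiple() comes from the new MultiPutCache interface; adapters that
// support it can persist several entries in a single operation.
$cache->saveMultiple([
    'user.1' => ['name' => 'Alice'],
    'user.2' => ['name' => 'Bob'],
], 300); // lifetime in seconds, now honored by ArrayCache as well

var_dump($cache->fetch('user.1'));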

Installation

You can install the Cache component using Composer:

composer require doctrine/cache:^1.6

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Friday 25 December, 2015

Favicon for Doctrine Project 02:00 Doctrine DBAL 2.5.3 Released » Post from Doctrine Project Visit off-site link

Doctrine DBAL 2.5.3 Released

We are happy to announce the immediate availability of Doctrine DBAL 2.5.3.

The SQLServer platform support for pagination query modification was completely rewritten, improving stability and code quality as well as ease of maintenance. #818

Dependency constraints on the doctrine/common component supported versions were corrected, allowing users to install doctrine/common 2.6.* together with the DBAL. #2268

Installation

You can install the DBAL component using Composer:

composer require doctrine/dbal:~2.5.3

Please report any issues you may have with the update on the issue tracker.

Favicon for Doctrine Project 02:00 Common 2.5.3 and 2.6.1 Released » Post from Doctrine Project Visit off-site link

Common 2.5.3 and 2.6.1 Released

We are happy to announce the immediate availability of Doctrine Common 2.5.3 and 2.6.1.

Common 2.5.3

This release corrects an issue with the precedence of namespaces being matched by the SymfonyFileLocator #367.

Common 2.6.1

This release includes all of the fixes reported above for 2.5.3.

Installation

You can install the Common component using Composer and one of the following composer.json definitions:

{
    "require": {
        "doctrine/common": "~2.5.3"
    }
}
{
    "require": {
        "doctrine/common": "~2.6.1"
    }
}

Please report any issues you may have with the update on the issue tracker.

Favicon for Doctrine Project 02:00 Doctrine ORM 2.5.3 Released » Post from Doctrine Project Visit off-site link

Doctrine ORM 2.5.3 Released

We are happy to announce the immediate availability of Doctrine ORM 2.5.3.

Dependency constraints on the doctrine/common component supported versions were corrected, allowing users to install doctrine/common version 2.6.* together with the ORM. This also means that PHP 7 scalar type-hints and return type declarations are now reflected in the generated proxy classes. #4884

Merging versioned entities caused the merged instance to have a null version: this is now fixed. #1573

It was impossible to use interface names when referencing entity types in DQL: it is now possible to do so when using the ResolveTargetEntityListener. #1573
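
As a hedged sketch of how that listener is typically wired up (the InvoiceSubjectInterface and Customer names are invented placeholders, and $evm stands for the EventManager the EntityManager was built with):

<?php
use Doctrine\ORM\Events;
use Doctrine\ORM\Tools\ResolveTargetEntityListener;

$listener = new ResolveTargetEntityListener();

// Map the (hypothetical) interface to a concrete entity class.
$listener->addResolveTargetEntity(
    'App\Model\InvoiceSubjectInterface',
    'App\Entity\Customer',
    []
);

$evm->addEventListener(Events::loadClassMetadata, $listener);

// DQL may now reference the interface name:
// SELECT s FROM App\Model\InvoiceSubjectInterface s WHERE ...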

Installation

You can install the ORM component using Composer:

composer require doctrine/orm:~2.5.3

Please report any issues you may have with the update on the issue tracker.

News stories from Tuesday 15 December, 2015

Favicon for Doctrine Project 02:00 Doctrine MongoDB ODM release 1.0.4 » Post from Doctrine Project Visit off-site link

Doctrine MongoDB ODM release 1.0.4

We are happy to announce the immediate availability of Doctrine MongoDB ODM 1.0.4.

Bug fixes in this release

Notable fixes may be found in the changelog. A full list of issues and pull requests included in this release may be found in the 1.0.4 milestone.

Installation

You can install the latest version using the following composer.json definitions:

{
    "require": {
        "doctrine/mongodb-odm": "^1.0.4"
    }
}

Doctrine MongoDB ODM 1.1 requires PHP 5.5+

The current master branch saw its PHP requirement bumped to 5.5 recently. If you are still using the master version in your project you should switch to a stable release as soon as possible:

{
    "require": {
        "doctrine/mongodb-odm": "^1.0"
    }
}

This will ensure you are using stable versions and will use 1.1 as soon as it’s released.

The upcoming releases of Doctrine MongoDB (1.3) and ODM (1.1) will also drop support for all MongoDB driver versions before 1.5. If you are still using an older driver, please consider upgrading it in order to receive future updates.

News stories from Tuesday 08 December, 2015

Favicon for Doctrine Project 02:00 Jira Issues Migration » Post from Doctrine Project Visit off-site link

Jira Issues Migration

We have started the migration of all our Jira tickets to Github Issues.

These last months we had a lot of trouble with our Jira and we just cannot find the time to update and maintain it anymore. On top of that, spam is causing more maintenance for us, deleting user accounts and tickets. Sadly there seem to be no appropriate spam protection plugins, and we couldn’t prevent this.

We are by no means unsatisfied with Jira, and to be honest we have been fighting this migration step as long as possible. Github Issues is a small fish against Jira’s powers, especially issue filtering, bulk operations and the Agile board. But for Doctrine it’s best to migrate to Github to reduce our maintenance and operations overhead and to integrate more tightly with the tooling we already have.

For now Common, DBAL and ORM issues have been imported into Github using the amazing Importer API. Even though this API is still in Beta, it works quite flawlessly. If you are interested in our migration scripts see this repository in Github. They are very low-level and procedural but get the job done.

Jira has been changed into Read-Only mode for Common, DBAL and ORM projects, please use the Github based issue trackers instead from now on:

What is still missing?

  • Versions from Jira need to be exported and imported into Github releases with their release date, changelog and description.
  • Permanent redirects for both Jira versions and issues to their respective Github counterparts have to be prepared and dynamically generated from our webserver, when we decommission Jira. This will help us keep deeplinks to Jira issues.
  • Cleanup, categorize and prepare the newly imported Github issues.

We hope to complete these steps this week. The last one will take a bit longer.

What we could not import

We were not able to import attachments, issue status transitions and user/reporter assignments between Jira and Github. This information will be lost once we disable Jira.

News stories from Friday 04 December, 2015

Favicon for Doctrine Project 02:00 Common 2.5.2 and 2.6.0 Released » Post from Doctrine Project Visit off-site link

Common 2.5.2 and 2.6.0 Released

We are happy to announce the immediate availability of Doctrine Common 2.5.2 and 2.6.0.

Common 2.5.2

chmod() warnings caused by proxy generation are now silenced #383 DCOM-299.

SymfonyFileLocator#getAllClassNames() was dropping some classes: now fixed #384 DCOM-301.

Corrected fatal error triggered by AbstractManagerRegistry#getManagerForClass() when no parent class is found for a proxy #387 DCOM-303.

You can find the complete changelog for this release in the v2.5.2 release notes.

Common 2.6.0

This release includes all of the fixes reported above for 2.5.2, as well as the following changes:

Proxy generation now supports PHP 7.0+ scalar type hints and return types #376 DCOM-294.
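
For illustration, a method like the following (the Product class and price field are invented placeholders) now keeps its PHP 7 signature in the generated proxy:

<?php
class Product
{
    private $price = 0;

    // Scalar parameter types are now mirrored in generated proxy classes.
    public function setPrice(int $price)
    {
        $this->price = $price;
    }

    // Return type declarations are preserved as well.
    public function getPrice(): int
    {
        return $this->price;
    }
}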

Switched autoloading to PSR-4 #389 DCOM-305.

Added a .gitattributes to the repositories, reducing the size of the package that is installed by composer #380 DCOM-296.

You can find the complete changelog for this release in the v2.6.0 release notes.

Installation

You can install the Common component using Composer and one of the following composer.json definitions:

{
    "require": {
        "doctrine/common": "~2.5.2"
    }
}
{
    "require": {
        "doctrine/common": "~2.6.01"
    }
}

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Thursday 03 December, 2015

Favicon for Doctrine Project 02:00 Cache 1.5.2 Released » Post from Doctrine Project Visit off-site link

Cache 1.5.2 Released

We are happy to announce the immediate availability of Doctrine Cache 1.5.2.

Cache 1.5.2

This release corrects a few bugs:

Fetching false values from the cache via `fetchMultiple` was causing incorrect misses (#105).

Cache paths were exceeding the Windows MAX_PATH length (#107).

The MongoDBCache was not failing silently in case of DB-side exceptions (#108).

You can find the complete changelog for this release in the v1.5.2 release notes.

Installation

You can install the Cache component using the following composer.json definitions:

{
    "require": {
        "doctrine/cache": "~1.5.2"
    }
}

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Tuesday 01 December, 2015

Favicon for Fabien Potencier 01:00 Announcing 24 Days of Blackfire » Post from Fabien Potencier Visit off-site link

I still remember the excitement I had 15 years ago when I discovered my first programming advent calendar; it was one about Perl. It was awesome, and every year, I was waiting for another series of blog posts about great Perl modules. When I open-sourced symfony1, I knew that writing an advent calendar would help adoption; Askeet was indeed a great success and the first advent calendar I was heavily involved with. I wrote another one, Jobeet, for symfony 1.4 some years later.

And today, I'm very happy to announce my third advent calendar, this one about Blackfire. This time, the goal is different though: in this series, I won't write an application, but instead, I'm going to look at some development best practices, covering topics like profiling, performance, testing, continuous integration, and my vision of performance optimization best practices.

I won't reveal more about the content of the 24 days as the point is for you to discover a new chapter day after day, but I can already tell you that I have some great presents for you... just one small clue: it's about Open-Sourcing something. I'm going to stop this blog post now before I tell you too much!

Enjoy the first installment for now as it has just been published.

News stories from Monday 23 November, 2015

Favicon for Doctrine Project 02:00 Doctrine ORM 2.5.2 Release » Post from Doctrine Project Visit off-site link

Doctrine ORM 2.5.2 Release

We are happy to announce the immediate availability of Doctrine ORM 2.5.2.

Installation

You can install this version of the ORM by using Composer and the following composer.json contents:

Changes since 2.5.1

This is a list of issues resolved in 2.5.2 since 2.5.1:

Bug Fixes

Improvements

Documentation

Please report any issues you may have with the update on the mailing list or on JIRA.

News stories from Friday 06 November, 2015

Favicon for Doctrine Project 02:00 Doctrine Inflector release 1.1.0 » Post from Doctrine Project Visit off-site link

Doctrine Inflector release 1.1.0

We are happy to announce the immediate availability of Doctrine Inflector 1.1.0.

This release adds a feature that allows upper-casing words separated by a custom delimiter (#11).

We discovered that heroes, buffalo and tomatoes have something in common (#18).

“criteria” and “criterion” plural and singular form were reversed: now fixed (#19).

Additional inflections were introduced for more irregular forms (#20 #22 #24).

Last but not least, we now explicitly support and test against PHP 7 (#21).
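
As a small usage sketch of the corrected inflections (the expected outputs in the comments follow the fixes listed above):

<?php
use Doctrine\Common\Inflector\Inflector;

echo Inflector::pluralize('criterion');  // criteria
echo Inflector::singularize('criteria'); // criterion
echo Inflector::pluralize('hero');       // heroes
echo Inflector::pluralize('tomato');     // tomatoes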

Installation

You can install the Inflector component via the following composer.json definition:

{
    "require": {
        "doctrine/inflector": "~1.1.0"
    }
}

News stories from Tuesday 03 November, 2015

Favicon for Doctrine Project 02:00 Doctrine MongoDB ODM release 1.0.3 » Post from Doctrine Project Visit off-site link

Doctrine MongoDB ODM release 1.0.3

We are happy to announce the immediate availability of Doctrine MongoDB ODM 1.0.3.

Reusing embedded documents

Until now, we have advised developers to deep clone embedded documents when changing owning documents; otherwise, strange things could happen. The reason was that in order for ODM to properly track changes, it stored parent associations and managed the lifecycle of each document (top-level and embedded alike). It was therefore reasonable that Doctrine required relocated objects to be distinct instances.

Manual cloning is no longer needed!

With this release, ODM will now do all the heavy lifting for you. Documents found to have been reused during a persist or flush lifecycle event will be cloned by the UnitOfWork automatically and updated on the parent document or collection.
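
A hedged illustration of what that means in practice; the User and Address classes, their setters, and the $dm DocumentManager are invented placeholders, not part of the ODM itself:

<?php
$address = new Address('221B Baker Street');

$alice = new User('alice');
$alice->setAddress($address);

$bob = new User('bob');
$bob->setAddress($address); // the same embedded instance is reused on purpose

$dm->persist($alice);
$dm->persist($bob);

// No manual deep clone needed: the UnitOfWork clones the reused embedded
// document during flush and updates the parent documents accordingly.
$dm->flush();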

Other bug fixes

Notable fixes may be found in the changelog. A full list of issues and pull requests included in this release may be found in the 1.0.3 milestone.

Work on 1.1 is starting and it will require PHP 5.5+

We are happy to announce that work on 1.1 is commencing! While no release dates have been scheduled, you can take a look at the 1.1 milestone for a brief list of goodies that we intend to ship next. If you would like to suggest additional features or, better yet, help with development, please get in touch. Currently, we are looking forward to implementing hydrated aggregation results (especially now that MongoDB has announced the $lookup operator, available to everybody in 3.2) and custom collection classes for EmbedMany and ReferenceMany associations.

The current master branch will soon become the development branch for 1.1 and the PHP requirement will be bumped to 5.5. If you cannot upgrade your PHP runtime, please continue to use the 1.0.x branch. If you are interested in testing the latest bug fixes (before we tag them), you may want to follow the 1.0.x branch.

News stories from Monday 02 November, 2015

Favicon for Doctrine Project 02:00 Cache 1.4.4 and 1.5.1 Released » Post from Doctrine Project Visit off-site link

Cache 1.4.4 and 1.5.1 Released

We are happy to announce the immediate availability of Doctrine Cache 1.4.4 and 1.5.1.

Cache 1.4.4

This release fixes the version number reported in Doctrine\Common\Cache\Version::VERSION.

Additionally, a flaw in CacheProvider#fetchMultiple() was fixed: null and false-y values being fetched were considered cache misses, but are now correctly included in the results (#104).
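
A rough sketch of the corrected behavior (the keys and values are invented for illustration):

<?php
use Doctrine\Common\Cache\ArrayCache;

$cache = new ArrayCache();
$cache->save('feature.enabled', false);
$cache->save('user.count', 0);

// Falsy entries used to be dropped as misses; they are now returned
// alongside any other hits.
$values = $cache->fetchMultiple(['feature.enabled', 'user.count', 'missing']);
// ['feature.enabled' => false, 'user.count' => 0]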

You can find the complete changelog for this release in the v1.4.4 release notes.

Cache 1.5.1

This release includes all the fixes mentioned in the above 1.4.4 patch.

You can find the complete changelog for this release in the v1.5.1 release notes.

Installation

You can install the Cache component using Composer and either of the following composer.json definitions:

{
    "require": {
        "doctrine/cache": "~1.4.4"
    }
}
{
    "require": {
        "doctrine/cache": "~1.5.1"
    }
}

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Wednesday 28 October, 2015

Favicon for Doctrine Project 02:00 Cache 1.4.3 and 1.5.0 Released » Post from Doctrine Project Visit off-site link

Cache 1.4.3 and 1.5.0 Released

We are happy to announce the immediate availability of Doctrine Cache 1.4.3 and 1.5.0.

Cache 1.4.3

This release fixes some minor issues that prevented various cache adapters from correctly reporting success or failure in case of cache key deletion (#95).

Another issue being fixed is related to CacheProvider#fetchMultiple(), which was failing to operate when an empty list was given to it (#90).

Also, the CacheProvider does not store version information internally unless CacheProvider#deleteAll() was called at least once (#91).

You can find the complete changelog for this release in the v1.4.3 release notes.

Cache 1.5.0

This release includes all the changes released with version 1.4.3, as well as further bug fixes and improvements that will require you to clean your caches (if file-based) during the upgrade.

PHP7 support is now guaranteed (#92).

File-based caches now use far fewer directories (#94).

Proper support for wincache multi-get was added (#97).

Predis cache adapter now relies on the Predis\ClientInterface (#87).

You can find the complete changelog for this release in the v1.5.0 release notes.

Credits

We would like to thank all contributors who patiently supported us in fixing the long-standing file-based cache directory structure issues, and especially:

Installation

You can install the Cache component using Composer and either of the following composer.json definitions:

{
    "require": {
        "doctrine/cache": "~1.4.1"
    }
}
{
    "require": {
        "doctrine/cache": "~1.5.0"
    }
}

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Monday 12 October, 2015

Favicon for Zach Holman 02:00 Dev Evangelism » Post from Zach Holman Visit off-site link

I think the first question that should be asked after every developer evangelist finishes their talk is ”YO DO YOU ACTUALLY BELIEVE WHAT U JUST SAID THO”.

A sneaky world

Dev evangelism is this weird world where companies pay employees to go out to conferences and meetups — really whoever will have ‘em — and give talks. This is definitely not a new thing, although the last few years it’s been feeling more and more prevalent.

Crowd

I mean, I get it. Evangelism gets your foot in the door in a lot of hard areas right now: hiring, getting the word out about your company, and showing people how to use your product. It’s not a horrible way to do it, either: I’d much rather see companies support conferences as a way of hiring rather than pay recruiters to spam every developer they can get their hands on.

I’m just worried about how some of these companies are taking a good thing and twisting it for their own purposes.

Supporting whatever

I gave a lot of talks while at GitHub, and I started hearing “oh yeah you’re that dev evangelist at GitHub!” from time to time. This always made me feel funny because I considered myself a developer first and foremost; I had the most commits in the company, dammit, why don’t people who don’t have access to the repo just inherently know that? Sheesh.

I think “evangelism” is done best when you pull from the people actually doing the work. GitHub used to support their employees and let them give talks at any conference they were invited to or were accepted to speak at. The reason I liked this policy was that the goal was to support employees, which in turn led to better talks. It was pretty genuine, and the whole community gained from it. We were completely hands-off when it came to what the talks were about… some were inevitably about experiences at GitHub, some were about programming, and some were about completely different topics altogether.

I think that’s a pretty important part right off the bat that a lot of companies tend to miss. The best talk is one where 1) the speaker really wants to give it, and 2) it’s something that’s drawn from experience rather than having the explicit goal to promote the company. Both of these are problematic if the speaker themselves aren’t deep in the trenches — gaining actual experience to share — rather than talking about theoretical things they gleaned from working in the industry ten years ago.

If a company’s spending money for the purposes of “evangelism”, they’re better off letting their employees talk about what’s most meaningful to share with other people rather than what directly benefits the company.

Sell without selling

I’ve gotten a number of offers lately from companies who don’t get this. They think I gave talks to sell the company, when really I gave talks because I thought they would be helpful to other people. My talks came from real pain: I had worked in bad environments before, and I could say hey, let me tell you a better way to work! It was lovely to share these things with people who might be in the same situation.

When these weirdo companies pinged me, they assumed I’m going to swing in there, drop a ton of talks about how their real-time app for middle managers is going to change the world, and they’ll make oodles of money. They assume companies can bend speakers to better amplify their own message.

That’s a fucked way of doing things. What’s more, the average person in the audience is going to see through this and tune out (or worse, make a mental note that they think your company’s fucked).

Interestingly enough, of course, making more genuine talks that resonate with people is a better way to market your company than trying to set out and market your company in the first place.

Don’t be afraid to invest in employees in areas that might not immediately contribute to your bottom line. Remember that talks are a great time to share what you’ve experienced with others, and you don’t have to monetize every single moment of that.

Favicon for Zach Holman 02:00 Opt-in Transparency » Post from Zach Holman Visit off-site link

Behold… a great way to make your employees feel like shit:

Employee: Yeah… I just don’t really understand why we’re building it out this way. It doesn’t really make sense, and I think it’s going to ultimately be harmful for the company.

Manager: Just build it, please.

This exchange — though not uncommon — isn’t going to go away any time soon. At the end of the day, there’s a power relationship happening, and the employee ain’t gonna be the one to win out.

There’s a way to help combat the effects of it, though: context.

Yeah, but why?

There’s this concept I’ve been fascinated with for the last couple of years, and I’m just going to give it a name: opt-in transparency. Can you be fully open as an organization, but not force that openness upon everyone by default? Basically, I want as many people in the organization to have access to as many things as possible, but we’re all busy, and unless I want to go out of my way to dig into something, my time is respected enough not to bother me with every little detail.

Decision context

Here’s one of my favorite stories from my time at GitHub:

HR was making some change to health insurance. Insurance is not something that’s in my wheelhouse; I’m glad we had good coverage, of course, but I’ve been lucky enough to not be impacted by it one way or another too much.

That said, when the company-wide announcement about the new change in health plans came through, some minor thing in it triggered warning bells in my head. Whoa wait now, this seems a little shittier, what the fuck are they doing here? SOMEONE IS WRONG AND I KNOW THIS BECAUSE IM RIGHT

So I did what any self-aggrandizing self-crowned hero of the people would do: I blew the dust off my special custom-order Flamewar Keyboard 3000, plugged it in, and prepared to really bring the weight of My Unique Perfect Logic™ down on this thread.

Right before I was going to start typing, I noticed that the HR team member who posted the initial thread (thanks Heather!) had three URLs appended to the bottom of the post. These were links to an issue and two pull requests to an internal HR documentation repository where the HR team had discussed the changes that were announced in the thread. Curious, I clicked into them and saw that the discussions themselves spanned several weeks and several hundred comments, all covering the proposed changes.

By the time I finished reading the discussions, I was fully on board with the change, and I found it’s what I would have done had I been in their shoes during the decision process.

That was a pretty powerful realization. It’s one of those things where the output of a decision — in this case, changing insurance — didn’t immediately make sense to me, but the context surrounding the decision made all the sense in the world.

Design context

Over the years I would occasionally butt heads with my friend Kyle Neath, the first designer and former head of product at GitHub.

A lot of it stemmed from my reactions to possible screens he was designing. I’d say, hey, I’m not sure I really dig the latest comp you posted.

And more often than not — and this is a mark of a great designer — he’d come back with already-sketched pages of the same screen pictured six months, twelve months, three years, and five years from now. He gave us context behind his decisions. And almost every single time — that motherfucker — he would win the argument this way. By showing that entire context of his future vision detailed out, I could very comfortably buy into a decision that I don’t necessarily agree with 100% today, because I’ve bought into the steps needed to get to the long-term vision.

Sharing that type of context can be very, very valuable, and it forces you to think broader than just today’s problems.

Async and open

This is part of the reason why I advocate so strongly for remote-first and asynchronous companies. By the very nature of how you work internally, you’re creating self-documenting progress upon which anyone in the future can come back and reflect.

People promote transparency as a huge culture value, and, while I don’t think that’s wrong, it really depends on how you use it. As the company grows larger, I don’t want to be inundated with every single goddamn decision. It becomes a paralyzing aspect of the culture, and pretty soon no one can get anything done. You don’t want to be the company that’s full of shippers who can’t ever get anything shipped.

If, on the other hand, you allow people to opt into the full context of these discussions, you promote a healthy and sheltered creative process, but still encourage others into your discussions only if they are deeply passionate about helping you out. From the outsider’s perspective you might not care about 95% of the discussions happening in the company, but you might spend that remaining 5% on something you can genuinely pitch in and improve.

Opt-in transparency is a good balance of transparency, inclusiveness, and creative problem solving. Try to aim for those goals rather than pushing all your decision making behind closed doors. It’s a better way to create.

News stories from Thursday 01 October, 2015

Favicon for Zach Holman 02:00 Remote-First vs. Remote-Friendly » Post from Zach Holman Visit off-site link

Yeah! We’re remote friendly! We got Bob who lives in San Diego, we’re based in San Francisco, and we have a Slack room, and people usually can come in to work at ANY time (between 8am and 9am), but really fuck Bob he’s kind of a loner and we’re going to probably let him go soon anyway, but yeah you can totes work remote!

We’re kind of in the surly teenager phase of remote work right now. A lot of companies are using tools like Slack, Hangouts, and GitLab, so our technical chops are heading in the right direction… but our processes and workflows still have a long way towards maturity.

Just because you happen to use chat rooms doesn’t mean you’ve suddenly become a glorious haven for remote workers, dammit.

Tools- and process-first

Look: to some extent, I don’t even really care if everyone on your team actually lives in the same city. That’s great — they could live on the same block for all I care. Maybe you chain them to their desks in some sort of twisted open office floor plan perversion, who knows. The point is that our tools have come a long way, but unless we adjust our processes, we won’t use those tools to their fullest extent.

xubbers meetup

I think there’s a split between being remote-friendly — hiring some workers in a different city — and remote-first, meaning you build your development team around a workflow that embraces the concepts of remote work, whether or not your employees are remote.

By forcing yourself to use chat instead of meetings, by forcing yourself to use chatops to mercilessly automate every single manual action, you end up creating things faster, with more built-in context, and greater ability to share your knowledge across the organization.

If you’re not working in a remote-first environment today, not only are you not going to have a remote-friendly environment tomorrow, but you’re going to eventually have a hard time retaining talent and keeping competitive pace in the future.

The world of work is changing. That’s just the way it is.

Other ways to not fuck up remote work

Assuming you are operating in a remote-first environment and you want to dip your toes into hiring some remote workers, here’s a couple pointers that you might want to keep in mind:

Geographical makeup of teams

The number one indicator of well-functioning remote teams inside of a company is a reinforcement of remote employees in the structure of the team itself.

In simpler words:

Teams with one or two remote employees on them are fucked, and teams with a larger proportion tend to do better.

I’ve seen this play out again and again across many different spectrums of companies. It seems to be such a clear indicator that if you’re the only remote employee on a team, I’d recommend being proactive and trying to move to a different team entirely (unless the team itself is particularly adept at working in a remote-first environment).

I think the rationale behind this perspective makes sense, and I don’t think it’s inherently mean-spirited, either: if seven people are in the same room in San Francisco and someone else is in Singapore, the seven locals are naturally going to have more informal and formal conversations about the product, unless they go out of their way to move their conversation online. It’s doable, but it takes a dedicated team to do that.

If you’re going to have a go at fostering a strong remote culture in your company, try structuring a diverse representation of geographies on a team. If you don’t have enough of one or the other, aim for either all-remote or all-local teams… it’s better than having the odd person stuck as the de facto outcast.

Timezones, not just geography

Having remote workers is one thing, but having remote workers across timezones is another.

I’ve seen some companies proudly say their culture is remote, but their workers tend to line up between Seattle, Portland, and San Francisco, all in one timezone. Even if they’re stretched across the United States or Europe, that’s still only three or four hours across, and that’s close enough to enforce a culture of a “work day”.

Distributed map

Again, that’s fine if that’s the culture you’re looking to be. But if you’re really aiming for a remote-first culture, spreading your team across really varying timezones will force you to use tools differently. You’ll lean more heavily on email, chat, pull requests and other asynchronous work rather than relying upon meetings, daily standups, and power work luncheons.

Just like the aforementioned diversity of remote/local ratio splits on teams, try to enforce a split of timezones as well, where possible. Baking that into the structure of the team itself helps you stay remote-first by default.

Face time

Lastly, and very simply: you can’t be digital all the time. If you want to build a great remote environment, you need to front the dough to have some in-person face time from time to time. Fly people in, get them meeting each other in meatspace, and make things a little more human.

Hack house

It’s amazing what you can accomplish in a two day trip. Creative problem solving becomes easier, people identify closer with real faces instead of just avatars, and all around it can be a better experience than sitting around computers all the time.


I’m pretty ecstatic that so many companies are getting better at remote work… I really am. When I first wrote How GitHub Works, a lot of this stuff was still a little amorphous at the time. Seeing the blistering growth of Slack and other tools over the last few years has been really lovely; I think people are really starting to get it.

But there’s always room to improve, of course! There really is a big gulf between being remote-friendly and being remote-first when you’re helping to build your culture, and it’s important to focus on ingraining these things into your process early and often.

News stories from Wednesday 16 September, 2015

Favicon for Doctrine Project 02:00 Doctrine DBAL 2.5.2 released » Post from Doctrine Project Visit off-site link

Doctrine DBAL 2.5.2 released

We are happy to announce the immediate availability of Doctrine DBAL 2.5.2.

This version fixes a regression where dropping a database on PostgreSQL didn’t work properly anymore as well as several other issues.
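
For reference, a minimal sketch of the call that had regressed; the connection parameters and database name are placeholders, not anything prescribed by this release:

<?php
use Doctrine\DBAL\DriverManager;

$conn = DriverManager::getConnection([
    'driver'   => 'pdo_pgsql',
    'host'     => 'localhost',
    'user'     => 'postgres',
    'password' => 'secret',
]);

// Dropping a database on PostgreSQL works again in 2.5.2.
$conn->getSchemaManager()->dropDatabase('app_test');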

You can find all the changes on JIRA:

You can install the DBAL using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/dbal": "2.5.2"
    }
}

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Wednesday 02 September, 2015

Favicon for Grumpy Gamer 09:00 Happy Birthday Monkey Island » Post from Grumpy Gamer Visit off-site link

I guess Monkey Island turns 25 this month. It’s hard to tell.

mi_title_ega.jpg

Unlike today, you didn’t push a button and unleash your game to billions of people. It was a slow process of sending “gold master” floppies off to manufacturing, which was often overseas, then waiting for them to be shipped to stores and the first of the teeming masses to buy the game.

Of course, when that happened, you rarely heard about it. There was no Internet for players to jump onto and talk about the game.

There was CompuServe and Prodigy, but those catered to a very small group of very highly technical people.

Lucasfilm’s process for finalizing and shipping a game consisted of madly testing for several months while we fixed bugs, then 2 weeks before we were to send off the gold masters, the game would go into “lockdown testing”.  If any bug was found, there was a discussion with the team and management about if it was worth fixing.  “Worth Fixing” consisted of a lot of factors, including how difficult it was to fix and if the fix would likely introduce more bugs.

Also keep in mind that when I made a new build, I didn't just copy it to the network and let the testers at it, it had to be copied to four or five sets of floppy disks so it could be installed on each tester’s machine.  It was a time-consuming and dangerous process. It was not uncommon for problems to creep up when I made the masters and have to start the whole process again. It could take several hours to make a new set of five testing disks.

It’s why we didn’t take getting bumped from test lightly.

During the 2nd week of “lockdown testing”, if a bug was found we had to bump the release date. We required that each game had one full week of testing on the build that was going to be released. Bugs found during this last week had to be crazy bad to fix.

When the release candidate passed testing, it would be sent off to manufacturing. Sometimes this was a crazy process. The builds destined for Europe were going to be duplicated in Europe and we needed to get the gold master over there, and if anything slipped there wasn’t enough time to mail them. So, we’d drive down to the airport and find a flight headed to London, go to the gate and ask a passenger if they would mind carrying the floppy disks for us, and someone would meet them at the gate.

Can you imagine doing that these days? You can’t even get to the gate, let alone find a person that would take a strange package on a flight for you. Different world.

floppies.jpg

After the gold masters were made, I’d archive all the source code. There was no version control back then, or even network storage, so archiving the source meant copying it to a set of floppy disks.

I made these disks on Sept 2nd, 1990, so the gold masters were sent off within a few days of that.  They have a 1.1 version due to Monkey Island being bumped from testing. I don’t remember if it was in the 1st or 2nd week of “lockdown”.

It's hard to know when it first appeared in stores. It could have been late September or even October, and it happened without fanfare.  The gold masters were made on the 2nd, so that's what I'm calling The Secret of Monkey Island's birthday.

MI1_island_small.jpg

Twenty Five years. That’s a long time.

It amazes me that people still play and love Monkey Island. I never would have believed it back then.

It’s hard for me to understand what Monkey Island means to people. I am always asked why I think it’s been such an enduring and important game. My answer is always “I have no idea.”

I really don’t.

I was very fortunate to have an incredible team. From Dave and Tim to Steve Purcell, Mark Ferrari, an amazing testing department and everyone else who touched the game's creation. And also a company management structure that knew to leave creative people alone and let them build great things.

award.jpg

Monkey Island was never a big hit. It sold well, but not nearly as well as anything Sierra released. I started working on Monkey Island II about a month after Monkey Island I went to manufacturing with no idea if the first game was going to do well or completely bomb. I think that was part of my strategy: start working on it before anyone could say “it’s not worth it, let's go make Star Wars games”.

There are two things in my career that I’m most proud of. Monkey Island is one of them and Humongous Entertainment is the other. They have both touched and influenced a lot of people. People will tell me that they learned English or how to read from playing Monkey Island. People have had Monkey Island weddings. Two people have asked me if it was OK to name their new child Guybrush. One person told me that he and his father fought and never got along, except for when they played Monkey Island together.

It makes me extremely proud and is very humbling.

I don’t know if I will ever get to make another Monkey Island. I always envisioned the game as a trilogy and I really hope I do, but I don’t know if it will ever happen. Monkey Island is now owned by Disney and they haven't shown any desire to sell me the IP. I don’t know if I could make Monkey Island 3a without complete control over what I was making and the only way to do that is to own it. Disney: Call me.

Maybe someday. Please don’t suggest I do a Kickstarter to get the money, that’s not possible without Disney first agreeing to sell it and they haven’t done that.

Anyway…

Happy Birthday to Monkey Island and a huge thanks to everyone who helped make it great and to everyone who kept it alive for Twenty Five years.

fan_letter.jpgfan_pic1b.jpg

fan_letter2c.jpg

I thought I'd celebrate the occasion by making another point & click adventure, with verbs.

News stories from Monday 31 August, 2015

Favicon for Doctrine Project 02:00 Security Misconfiguration Vulnerability in various Doctrine projects » Post from Doctrine Project Visit off-site link

Security Misconfiguration Vulnerability in various Doctrine projects

We are releasing new versions of Doctrine Cache, Annotations, ORM and MongoDB ODM today that fix a security misconfiguration vulnerability. This vulnerability was assigned CVE-2015-5723. It requires an attacker to have direct access to a local user on the server to be exploitable. We consider exploitability to be low to medium.

Exploiting this vulnerability can allow attackers to perform local arbitrary code execution with privileges of other users (privilege escalation).

You are only affected by this vulnerability if your application runs with a umask of 0.

Please update:

  • Annotations to 1.2.7
  • Cache to 1.4.2 or 1.3.2
  • Common to 2.5.1 or 2.4.3
  • ORM to 2.5.1 or 2.4.8
  • MongoDB ODM to 1.0.2
  • MongoDB ODM Bundle to 3.0.1

If you want to check the fix or apply patch manually, we provide a Gist with all patches.

If you cannot upgrade, see our notes below how to mitigate the problem without having to patch the code.

We want to thank Ryan Lane for finding the vulnerability, Jonathan Eskew from the AWS team for passing this security vulnerability on to us, and Anthony Ferrara for helping us discuss and find solutions to the problem.

Details

Doctrine uses different kinds of caches and some of them read the cached entries using require or include to make use of APC or Opcache. In case of proxy generation we actually need to execute the code to make a new auto-generated class part of the code-base.

Doctrine always uses mkdir($cacheDirectory, 0777); on many of those cache directories. If your application is running with umask(0), this allows an attacker to write arbitrary code into the cache directory which can be executed with the user privileges of the webserver.

Running your application with umask(0) is not generally a good idea, but it is sometimes recommended as a simple solution to solve filesystem access problems when a console user and a web user both write to a common cache directory. In combination with a cache that executes the cache entries as code, this can allow local arbitrary code execution.

The patches released today change all caches that execute code to always use a default mask of 0775 instead of 0777.

  • In the case of Cache and Annotations we solve this by implementing a userland-configurable umask that defaults to 0002. We apply this to every mkdir and chmod so that you can reconfigure it to another mask if you must.
  • In all the other cases it’s a hardcoded change to 0775 for directories and 0664 for files.

We are aware that if you depend on umask(0), this is a very inconvenient change, because your code will break when different users write to the same cache directory.

We feel it is not safe to make developers and operations responsible for knowing how to secure our cache implementations. They are often third-party libraries used by other open-source systems; we want them to be safe no matter how users configure their systems.

Am I vulnerable?

Your application must run with umask(0) for this vulnerability to be exploitable. This does not necessarily have to be an explicit call to the PHP function; it can also happen if you misconfigured your shell or webserver to run with umask 0 by default.

You can easily check this by calling echo umask(); from both the shell and your webserver. It will return 0 if you are potentially vulnerable.

Second, you must be using Doctrine with the Annotations FileCache or the PhpFileCache cache implementation, or one of the ORM or ODM ProxyGenerators. Of course this vulnerability can also be present in any other library or your own application, when you dynamically generate PHP code into a directory with world writable permissions.

Do you provide fixes for all branches of all affected components?

No, fixes are only applied to the most recent versions of Doctrine components.

If you are running older components and don’t want to or cannot upgrade, you should look into the sections about immediate and proper fixes below, that show solutions that don’t require upgrading your code.

If your system and application are correctly setup, it is also likely that you are not vulnerable. See the next section for information about that.

Is there an immediate fix when I can’t upgrade?

Yes, as an immediate fix just make sure that your application runs with a non-zero umask all the time. Call umask(umask() | 0002); early in your code to prevent PHP from ever creating world writeable files and directories.

Warning: It can break your application if it relies on running with umask(0);.

This is not sufficient though, because the call to umask is not thread safe and a call to this function later in the code can reset the umask for all requests currently running. That means you must identify all code that calls umask(0); and change it.

When you are unsure if your generated cache is clean, you can regenerate all files after you have changed the umask of your application.

Is there a proper fix or security best practice to avoid this issue?

Yes, the best way to fix this problem is to always execute PHP code for a single application with the same user, independent of being called from the webserver, php-fpm or the shell. In this case you can always create directories with the default permissions of 0775 and files with 0664:

<?php
// safety measure to overrule webserver or shell misconfiguration
umask(umask() | 0002);

mkdir("/some/directory", 0775);
file_put_contents("/some/directory/file", "data");
chmod("/some/directory/file", 0664);

On most Linux distributions it is possible to execute cronjobs or supervisord jobs with the www-data, nginx or apache users that the webserver runs with.

Another way would be to use more advanced permission systems in Linux such as chmod +a or setfacl, both of which are not available on all distributions though.

Isn’t everyone just using 0777/0666 everywhere?

Yes, this practice is extremely wide-spread in many projects. This is why we think it is very important to make sure your application runs with a proper umask.

However, in our case the potential vulnerability is more severe than usual, because we use require/include to execute the written cache files, which can allow an attacker with access to a local user the possibility for executing arbitrary code with the webservers user.

Code that reads the generated cache files using fopen/file_get_contents could “only” be poisoned with invalid or wrong data by an attacker. This is severe by itself, but does not allow arbitrary code execution.

We want users of Doctrine to be safe by default, so we are changing this even if it will cause inconveniences.

Because this practice is so widespread, we have also notified as many OSS projects as possible beforehand, mainly through the PHP-FIG. Several of them are preparing security releases for their libraries as well.

Again, the nature of this issue is mostly remedied by not running with a umask of zero, so make sure this is the case for your applications.

Questions?

If you have questions you can sign up to the Doctrine User Mailing List and ask there, or join the #doctrine IRC channel on Freenode.

Favicon for Doctrine Project 02:00 Doctrine ORM 2.5.1 and 2.4.8 released » Post from Doctrine Project Visit off-site link

Doctrine ORM 2.5.1 and 2.4.8 released

We are happy to announce the immediate availability of Doctrine ORM 2.5.1 and 2.4.8.

These versions include a fix for the Security Misconfiguration Vulnerability described in an earlier blog post today.

Here are the changelogs:

Changelog 2.5.1

  • DCOM-293: Fix for Security Misconfiguration Vulnerability
  • DDC-3831: Fixed issue when paginator orders by a subselect expression
  • DDC-3699: Fix bug in EntityManager#merge: Skipping properties if they are listed after a not loaded relation
  • DDC-3684: Fix bug ClassMetadata#wakeupReflection when used with embeddables and Static Reflection
  • DDC-3683: SecondLevelCache: Fix bug in DefaultCacheFactory#buildCollectionHydrator()
  • DDC-3667: PersistentCollection: Fix BC break when creating empty Array/PersistentCollections

Changelog 2.4.8

This release contains several fixes that have been in 2.5.0 already and are just backported to 2.4 for convenience. This is the last release in the 2.4 branch and you should upgrade to 2.5.

  • DCOM-293: Fix for Security Misconfiguration Vulnerability
  • DDC-3551: Fix difference between DBAL 2.4 and 2.5 concerning platform initialization and version detection.
  • DDC-3240: EntityGenerator: Fix inheritance in Code-Generation
  • DDC-3502: EntityGenerator: Fixed parsing for php 5.5 ”::class” syntax
  • DDC-3500: Joined Table Inheritance: Fix applying ON/WITH conditions to first join in Class Table Inheritance
  • DDC-3343: Entities should not be deleted when using EXTRA_LAZY and one-to-many
  • DDC-3619: Bugfix: Prevent Identity Map Garbage Collection that can cause spl_object_hash collisions
  • DDC-3608: EntityGenerator: Properly generate default value from yml & xml mapping
  • DDC-3643: EntityGenerator: Fix EntityGenerator RegenerateEntityIfExists not working correctly.

As usual you can grab the latest versions from Composer.

News stories from Tuesday 18 August, 2015

Favicon for Doctrine Project 02:00 Doctrine MongoDB ODM release 1.0.0 » Post from Doctrine Project Visit off-site link

Doctrine MongoDB ODM release 1.0.0

In observance of August 18th, the day that Jon Wage tagged Doctrine MongoDB ODM’s first BETA release, we’ve come together for a big celebration. From humble beginnings as a weekend hack to port Doctrine 2’s data mapper pattern to NoSQL, the ODM quickly became a beast of a project and cut its teeth on production servers early on as a core dependency of the very first Symfony2 startups. Today, after five years of adoption, improvements, refactoring, and countless jokes… we are very happy to announce the immediate availability of Doctrine MongoDB ODM 1.0.0!

What is new in 1.0.0?

For our first stable release, we focused on fixing most known bugs (some of which were open for years), hardening existing features, and straightening out ODM’s behaviour and correctness where possible. In hopes of ensuring a pleasant upgrade experience, we have prepared a checklist for you, which highlights the most important changes that may require your attention. A complete list of resolved issues and pull requests may be found on GitHub under the 1.0.0 milestone.

Behind the scenes: Doctrine MongoDB 1.2.0

We are also happy to announce the immediate availability of Doctrine MongoDB 1.2.0, which is the underlying driver abstraction layer employed by the ODM. In particular, this release sports a brand new Aggregation Builder, along with improved query builder support for update operators and the full-text search introduced in MongoDB 2.6. For a full list of closed issues and pull requests, please see the release notes on GitHub.

Stop fooling around, I want my BETA back!

We apologize for any inconvenience, but Doctrine MongoDB ODM has officially gone stable and we don’t intend on shipping more BETAs anytime soon. Well, at least not until work begins on 2.0 :D

News stories from Friday 24 July, 2015

Favicon for Zach Holman 02:00 Diffing Images on the Command Line » Post from Zach Holman Visit off-site link

So about a year ago I realized that a play on Spaceman Spiff — one of Calvin’s alter-egos — would be a great name for a diffing tool. And that’s how spaceman-diff was born.

Then I forgot about it for a year. Classic open source. But like all projects with great names, it eventually came roaring back once I was able to make up an excuse — ANY mundane excuse — for its existence.

So today I’ll shout out to spaceman-diff, a very short script that teaches git diff how to diff image files on the command line.

Most of the heavy lifting is handled by jp2a: spaceman-diff is just a thin wrapper around it that makes it more suitable for diffing.

Install

This ain’t the README, dammit, so go to the repo to learn about all of that junk.

Learning via Git internals

Part of the fun of doing this (of doing anything silly like this, really) is digging into your tools and seeing what’s available to you. Writing spaceman-diff was kind of a fun way to learn a little bit more about extending Git’s diffing workflow.

There are a couple of different approaches to doing this within Git. The first was slightly naive and basically involved overriding git-diff entirely. That way, spaceman-diff handled all the file extension checks and had quite a bit more control over the actual diff itself. git-diff was invoked using an external diff tool set up with gitattributes. If the file wasn’t an image, we could pass the diff back to git-diff using the --no-ext flag. This was cool for a while, but it becomes problematic once you realize your diff wrapper would have to support all flags and commands passed to git-diff in order to fall back correctly (and, because of how Git passes commands to your external diff script, you don’t have access to the original command).

Another option is to use git difftool here. It’s actually a decent approach if you’re looking to completely replace the diffing engine entirely. Maybe you’re writing something like Kaleidoscope, or maybe a tool to view diffs directly on Bitbucket instead of something locally. It’s pretty flexible, but with spaceman-diff we only want to augment Git’s diff rather than rebuild the entire thing. It’d also be great to let people use git-diff rather than try to remember to type git-difftool when they want to diff images.

The Pro Git book has a nice section on how to diff binary files using gitattributes. There’s even a section on image files, although they use textconv, which basically takes a textual representation of a file (in their case, a few lines of image exif data: filesize, dimensions, and so on), and Git’s own diffing algorithm diffs it as normal blocks of text. That’s pretty close to what we want, but we’re not heathens here… we prefer a more visual diff. Instead, we use gitattributes to tell Git to use spaceman-diff for specific files, and spaceman-diff takes over the entire diff rendering at that point.


Nothing ground-breaking or innovative in computer science happening here, but it’s a fun little hack. Git’s always interesting to dive into because they do offer a lot of little hooks into internals. If you’re interested in this, or if you have a special binary file format you use a lot that could be helpful as a low-fi format diff, take a peek and see what’s available to you.

Provided, of course, you have a great pun for your project name. That comes first.

News stories from Sunday 05 July, 2015

Favicon for Fabien Potencier 00:00 "Create your Own Framework" Series Update » Post from Fabien Potencier Visit off-site link

Three years ago, I published a series of articles about how to create a framework on top of the Symfony components on this blog.

Over the years, its contents have been updated to match the changes in Symfony itself but also in the PHP ecosystem (like the introduction of Composer). But those changes were made on a public GitHub repository, not on this blog.

As this series has proved to be popular, a few months ago I decided to move it to the Symfony documentation itself, where it would be more exposed and maintained by the great Symfony doc team. It was a long process, but it's done now.

Enjoy the new version in a dedicated documentation section, "Create your PHP Framework", on symfony.com.

News stories from Wednesday 24 June, 2015

Favicon for the web hates me 10:00 Projektwerkstatt: SecurityGraph » Post from the web hates me Visit off-site link

I work for a large publishing house, and we easily have 500 software components in use. Most of it is probably PHP: lots of Symfony, Symfony2, Drupal, WordPress. You know the usual suspects. Listing the main frameworks is easy for all of us; unfortunately, we don't really know what else we have running on the side. […]

The post Projektwerkstatt: SecurityGraph appeared first on the web hates me.

News stories from Tuesday 23 June, 2015

Favicon for the web hates me 09:00 Projektwerkstatt: getYourFoundation.io » Post from the web hates me Visit off-site link

Day two of our little creativity series. Yesterday was about a deeper integration of Twitter into WordPress, and today it gets a bit more technical again. But first, from the beginning: lately I have once again had the good fortune to do some programming. Since I became a team lead, I unfortunately don't get around to it as often, which […]

The post Projektwerkstatt: getYourFoundation.io appeared first on the web hates me.

News stories from Monday 22 June, 2015

Favicon for the web hates me 14:00 Projektwerkstatt – twitter@wp » Post from the web hates me Visit off-site link

So let's start with the first part of the project-workshop week. The idea is a little older by now, but in my opinion still a good one. As you know, our blog is also on Twitter; we can proudly count a full 1431 followers. On top of that we build on WordPress, even if the technology behind it […]

The post Projektwerkstatt – twitter@wp appeared first on the web hates me.

Favicon for the web hates me 09:45 Woche der Projektideen » Post from the web hates me Visit off-site link

We start with a short post, or rather an announcement. Last week I once again had the time to write down some of my business ideas, and since, as so often, I cannot implement them all myself, I am presenting them to you; maybe a team will be found that is keen on them. You will […]

The post Woche der Projektideen appeared first on the web hates me.

News stories from Friday 19 June, 2015

Favicon for the web hates me 14:00 Highlights 2014 » Post from the web hates me Visit off-site link

Unfortunately we forgot about this six months ago. For the sake of completeness, today we are publishing the list of the ten most successful articles from 2014.

The post Highlights 2014 appeared first on the web hates me.

Favicon for the web hates me 09:00 Richtig krankmelden » Post from the web hates me Visit off-site link

Unfortunately I had to call in sick at work yesterday. That happens when you have two kids who both go to daycare and bring home pretty much everything that looks like a virus or an infection. Even though it sounds trivial, calling in sick is not as easy as you might think. OK, it is easy, but […]

The post Richtig krankmelden appeared first on the web hates me.

Favicon for nikic's Blog 02:00 Internal value representation in PHP 7 - Part 2 » Post from nikic's Blog Visit off-site link

In the first part of this article, high level changes in the internal value representation between PHP 5 and PHP 7 were discussed. As a reminder, the main difference was that zvals are no longer individually allocated and don’t store a reference count themselves. Simple values like integers or floats can be stored directly in a zval, while complex values are represented using a pointer to a separate structure.

The additional structures for complex zval values all use a common header, which is defined by zend_refcounted:

struct _zend_refcounted {
    uint32_t refcount;
    union {
        struct {
            ZEND_ENDIAN_LOHI_3(
                zend_uchar    type,
                zend_uchar    flags,
                uint16_t      gc_info)
        } v;
        uint32_t type_info;
    } u;
};

This header now holds the refcount, the type of the value and cycle collection info (gc_info), as well as a slot for type-specific flags.

In the following, the details of the individual complex types will be discussed and compared to the previous implementation in PHP 5. One of the complex types is references, which were already covered in the previous part. Another type that will not be covered here is resources, because I don’t consider them to be interesting.

Strings

PHP 7 represents strings using the zend_string type, which is defined as follows:

struct _zend_string {
    zend_refcounted   gc;
    zend_ulong        h;        /* hash value */
    size_t            len;
    char              val[1];
};

Apart from the refcounted header, a string contains a hash cache h, a length len and a value val. The hash cache is used to avoid recomputing the hash of the string every time it is used to look up a key in a hashtable. On first use it will be initialized to the (non-zero) hash.

If you’re not familiar with the quite extensive lore of dirty C hacks, the definition of val may look strange: It is declared as a char array with a single element - but surely we want to store strings longer than one character? This uses a technique called the “struct hack”: The array is declared with only one element, but when creating the zend_string we’ll allocate it to hold a larger string. We’ll still be able to access the larger string through the val member.

Of course this is technically undefined behavior, because we end up reading and writing past the end of a single-character array, however C compilers know not to mess with your code when you do this. C99 explicitly supports this in the form of “flexible array members”, however thanks to our dear friends at Microsoft, nobody needing cross-platform compatibility can actually use C99.

The new string type has some advantages over using normal C strings: Firstly, it directly embeds the string length. This means that the length of a string no longer needs to be passed around all over the place. Secondly, as the string now has a refcounted header, it is possible to share a string in multiple places without using zvals. This is particularly important for sharing hashtable keys.

The new string type also has one large disadvantage: While it is easy to get a C string from a zend_string (just use str->val) it is not possible to directly get a zend_string from a C string – you need to actually copy the string’s value into a newly allocated zend_string. This is particularly inconvenient when dealing with literal strings (constant strings occurring in the C source code).

There are a number of flags a string can have (which are stored in the GC flags field):

#define IS_STR_PERSISTENT           (1<<0) /* allocated using malloc */
#define IS_STR_INTERNED             (1<<1) /* interned string */
#define IS_STR_PERMANENT            (1<<2) /* interned string surviving request boundary */

Persistent strings use the normal system allocator instead of the Zend memory manager (ZMM) and as such can live longer than one request. Specifying the used allocator as a flag allows us to transparently use persistent strings in zvals, while previously in PHP 5 a copy into the ZMM was required beforehand.

Interned strings are strings that won’t be destroyed until the end of a request and as such don’t need to use refcounting. They are also deduplicated, so if a new interned string is created the engine first checks if an interned string with the given content already exists. All strings that occur literally in PHP source code (this includes string literals, variable and function names, etc) are usually interned. Permanent strings are interned strings that were created before a request starts. While normal interned strings are destroyed on request shutdowns, permanent strings are kept alive.

If opcache is used interned strings will be stored in shared memory (SHM) and as such shared across all PHP worker processes. In this case the notion of permanent strings becomes irrelevant, because interned strings will never be destroyed.

Arrays

I will not talk about the details of the new array implementation here, as this is already covered in a previous article. It’s no longer accurate in some details due to recent changes, but all the concepts are still the same.

There is only one new array-related concept I’ll mention here, because it is not covered in the hashtable post: Immutable arrays. These are essentially the array equivalent of interned strings, in that they don’t use refcounting and always live until the end of the request (or longer).

Due to some memory management concerns, immutable arrays are only used if opcache is enabled. To see what kind of difference this can make, consider the following script:

for ($i = 0; $i < 1000000; ++$i) {
    $array[] = ['foo'];
}
var_dump(memory_get_usage());

With opcache the memory usage is 32 MiB, but without opcache usage rises to a whopping 390 MB, because each element of $array will get a new copy of ['foo'] in this case. The reason an actual copy is done here (instead of a refcount increase) is that literal VM operands don’t use refcounting to avoid SHM corruption. I hope we can improve this currently catastrophic case to work better without opcache in the future.

Objects in PHP 5

Before considering the object implementation in PHP 7, let’s first walk through how things worked in PHP 5 and highlight some of the inefficiencies: The zval itself used to store a zend_object_value, which is defined as follows:

typedef struct _zend_object_value {
    zend_object_handle handle;
    const zend_object_handlers *handlers;
} zend_object_value;

The handle is a unique ID of the object which can be used to look up the object data. The handlers are a VTable of function pointers implementing various behaviors of an object. For “normal” PHP objects this handler table will always be the same, but objects created by PHP extensions can use a custom set of handlers that modifies the way it behaves (e.g. by overloading operators).

The object handle is used as an index into the “object store”, which is an array of object store buckets defined as follows:

typedef struct _zend_object_store_bucket {
    zend_bool destructor_called;
    zend_bool valid;
    zend_uchar apply_count;
    union _store_bucket {
        struct _store_object {
            void *object;
            zend_objects_store_dtor_t dtor;
            zend_objects_free_object_storage_t free_storage;
            zend_objects_store_clone_t clone;
            const zend_object_handlers *handlers;
            zend_uint refcount;
            gc_root_buffer *buffered;
        } obj;
        struct {
            int next;
        } free_list;
    } bucket;
} zend_object_store_bucket;

There’s quite a lot going on here. The first three members are just some metadata (whether the destructor of the object was called, whether this bucket is used at all, and how many times this object was visited by some recursive algorithm). The following union distinguishes between the case where the bucket is currently in use and the case where it is part of the bucket free list. Important for us is the case where struct _store_object is used:

The first member object is a pointer to the actual object (finally). It is not directly embedded in the object store bucket, because objects have no fixed size. The object pointer is followed by three handlers managing destruction, freeing and cloning. Note that in PHP destruction and freeing of objects are distinct steps, with the former being skipped in some cases (“unclean shutdown”). The clone handler is virtually never used. Because these storage handlers are not part of the normal object handlers (for whatever reason) they will be duplicated for every single object, rather than being shared.

These object store handlers are followed by a pointer to the ordinary object handlers. These are stored in case the object is destroyed without a zval being known (which usually stores the handlers).

The bucket also contains a refcount, which is somewhat odd given how in PHP 5 the zval already stores a reference count. Why do we need another? The problem is that while usually zvals are “copied” simply by increasing their refcount, there are also cases where a hard copy occurs, i.e. an entirely new zval is allocated with the same zend_object_value. In this case two distinct zvals end up using the same object store bucket, so it needs to be refcounted as well. This kind of “double refcounting” is one of the inherent issues of the PHP 5 zval implementation. The buffered pointer into the GC root buffer is also duplicated for similar reasons.

Now let’s look at the actual object that the object store points to. For normal userland objects it is defined as follows:

typedef struct _zend_object {
    zend_class_entry *ce;
    HashTable *properties;
    zval **properties_table;
    HashTable *guards;
} zend_object;

The zend_class_entry is a pointer to the class this object is an instance of. The two following members are used for two different ways of storing object properties. For dynamic properties (i.e. ones that are added at runtime and not declared in the class) the properties hashtable is used, which just maps (mangled) property names to their values.

However for declared properties an optimization is used: During compilation every such property is assigned an index and the value of the property is stored at that index in the properties_table. The mapping between property names and their index is stored in a hashtable in the class entry. As such the memory overhead of the hashtable is avoided for individual objects. Furthermore the index of a property is cached polymorphically at runtime.

The guards hashtable is used to implement the recursion behavior of magic methods like __get, which I won’t go into here.

Apart from the double refcounting issue already previously mentioned, the object representation is also heavy on memory usage with 136 bytes for a minimal object with a single property (not counting zvals). Furthermore there is a lot of indirection involved: For example, to fetch a property on an object zval, you first have to fetch the object store bucket, then the zend object, then the properties table and then the zval it points to. As such there are already four levels of indirection at a minimum (and in practice it will be no fewer than seven).

Objects in PHP 7

PHP 7 tries to improve on all of these issues by getting rid of double refcounting, dropping some of the memory bloat and reducing indirection. Here’s the new zend_object structure:

struct _zend_object {
    zend_refcounted   gc;
    uint32_t          handle;
    zend_class_entry *ce;
    const zend_object_handlers *handlers;
    HashTable        *properties;
    zval              properties_table[1];
};

Note that this structure is now (nearly) all that is left of an object: The zend_object_value has been replaced with a direct pointer to the object and the object store, while not entirely gone, is much less significant.

Apart from now including the customary zend_refcounted header, you can see that the handle and the handlers of the object value have been moved into the zend_object. Furthermore the properties_table now also makes use of the struct hack, so the zend_object and the properties table will be allocated in one chunk. And of course, the property table now directly embeds zvals, instead of containing pointers to them.

The guards table is no longer directly present in the object structure. Instead it will be stored in the first properties_table slot if it is needed, i.e. if the object uses __get etc. But if these magic methods are not used, the guards table is elided.

The dtor, free_storage and clone handlers that were previously stored in the object store bucket have now been moved into the handlers table, which starts as follows:

struct _zend_object_handlers {
    /* offset of real object header (usually zero) */
    int                                     offset;
    /* general object functions */
    zend_object_free_obj_t                  free_obj;
    zend_object_dtor_obj_t                  dtor_obj;
    zend_object_clone_obj_t                 clone_obj;
    /* individual object functions */
    // ... rest is about the same in PHP 5
};

At the top of the handler table is an offset member, which is quite clearly not a handler. This offset has to do with how internal objects are represented: An internal object always embeds the standard zend_object, but typically also adds a number of additional members. In PHP 5 this was done by adding them after the standard object:

struct custom_object {
    zend_object std;
    uint32_t something;
    // ...
};

This means that if you get a zend_object* you can simply cast it to your custom struct custom_object*. This is the standard means of implementing structure inheritance in C. However in PHP 7 there is an issue with this particular approach: Because zend_object uses the struct hack for storing the properties table, PHP will be storing properties past the end of zend_object and thus overwriting additional internal members. This is why in PHP 7 additional members are stored before the standard object instead:

struct custom_object {
    uint32_t something;
    // ...
    zend_object std;
};

However this means that it is no longer possible to directly convert between a zend_object* and a struct custom_object* with a simple cast, because the two are separated by an offset. This offset is what’s stored in the first member of the object handler table. At compile-time the offset can be determined using the offsetof() macro.

You may wonder why PHP 7 objects still contain a handle. After all, we now directly store a pointer to the zend_object, so we no longer need the handle to look up the object in the object store.

However the handle is still necessary, because the object store still exists, albeit in a significantly reduced form. It is now a simple array of pointers to objects. When an object is created a pointer to it is inserted into the object store at the handle index and removed once the object is freed.

Why do we still need the object store? The reason behind this is that during request shutdown, there comes a point where it is no longer safe to run userland code, because the executor is already partially shut down. To avoid this PHP will run all object destructors at an early point during shutdown and prevent them from running at a later point in time. For this a list of all active objects is needed.

Furthermore the handle is useful for debugging, because it gives each object a unique ID, so it’s easy to see whether two objects are really the same or just have the same content. HHVM still stores an object handle despite not having a concept of an object store.

Comparing with the PHP 5 implementation, we now have only one refcount (as the zval itself no longer has one) and the memory usage is much smaller: We need 40 bytes for the base object and 16 bytes for every declared property, already including its zval. The amount of indirection is also significantly reduced, as many of the intermediate structures were either dropped or embedded. As such, reading a property is now only a single level of indirection, rather than four.
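
To get a rough feeling for these numbers from userland, you can measure the per-object cost yourself. The following is only a minimal sketch (the Point class and the iteration count are made up, and the result also includes the zval slot in the array plus allocator overhead, so it will not match the 40 + 16 bytes exactly):

<?php
class Point {
    public $x;
    public $y;
}

$before = memory_get_usage();
$points = [];
for ($i = 0; $i < 100000; ++$i) {
    $points[] = new Point();
}
$after = memory_get_usage();

// Rough per-object cost, including the array element holding each object.
printf("~%d bytes per object\n", ($after - $before) / 100000);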

Indirect zvals

At this point we have covered all of the normal zval types, however there are a couple of additional special types that are used only in certain circumstances. One that was newly added in PHP 7 is IS_INDIRECT.

An indirect zval signifies that its value is stored in some other location. Note that this is different from the IS_REFERENCE type in that it directly points to another zval, rather than a zend_reference structure that embeds a zval.

To understand under what circumstances this may be necessary, consider how PHP implements variables (though the same also applies to object property storage):

All variables that are known at compile-time are assigned an index and their value will be stored at that index in the compiled variables (CV) table. However PHP also allows you to dynamically reference variables, either by using variable variables or, if you are in global scope, through $GLOBALS. If such an access occurs, PHP will create a symbol table for the function/script, which contains a map from variable names to their values.
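
As a small userland illustration of the two access paths (only a sketch; the variable names are made up), the following function performs both a normal access and a dynamic variable-variable access; the latter is what forces PHP to create a symbol table for the function:

<?php
function test() {
    $a = 42;          // $a is a compiled variable (CV), known at compile-time
    $name = 'a';
    var_dump($$name); // int(42) - dynamic access goes through the symbol table
    var_dump($a);     // int(42) - normal access uses the CV table directly
}
test();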

This leads to the question: How can both forms of access be supported at the same time? We need table-based CV access for normal variable fetches and symtable-based access for varvars. In PHP 5 the CV table used doubly-indirected zval** pointers. Normally those pointers would point to a second table of zval* pointers that would point to the actual zvals:

+------ CV_ptr_ptr[0]
| +---- CV_ptr_ptr[1]
| | +-- CV_ptr_ptr[2]
| | |
| | +-> CV_ptr[0] --> some zval
| +---> CV_ptr[1] --> some zval
+-----> CV_ptr[2] --> some zval

Now, once a symbol table came into use, the second table with the single zval* pointers was left unused and the zval** pointers were updated to point into the hashtable buckets instead. Here illustrated assuming the three variables are called $a, $b and $c:

CV_ptr_ptr[0] --> SymbolTable["a"].pDataPtr --> some zval
CV_ptr_ptr[1] --> SymbolTable["b"].pDataPtr --> some zval
CV_ptr_ptr[2] --> SymbolTable["c"].pDataPtr --> some zval

In PHP 7 using the same approach is no longer possible, because a pointer into a hashtable bucket will be invalidated when the hashtable is resized. Instead PHP 7 uses the reverse strategy: For the variables that are stored in the CV table, the symbol hashtable will contain an INDIRECT entry pointing to the CV entry. The CV table will not be reallocated for the lifetime of the symbol table, so there is no problem with invalidated pointers.

So if you have a function with CVs $a, $b and $c, as well as a dynamically created variable $d, the symbol table could look something like this:

SymbolTable["a"].value = INDIRECT --> CV[0] = LONG 42
SymbolTable["b"].value = INDIRECT --> CV[1] = DOUBLE 42.0
SymbolTable["c"].value = INDIRECT --> CV[2] = STRING --> zend_string("42")
SymbolTable["d"].value = ARRAY --> zend_array([4, 2])

Indirect zvals can also point to an IS_UNDEF zval, in which case it is treated as if the hashtable does not contain the associated key. So if unset($a) writes an UNDEF type into CV[0], then this will be treated like the symbol table no longer having a key "a".

Constants and ASTs

There are two more special types IS_CONSTANT and IS_CONSTANT_AST which exist both in PHP 5 and PHP 7 and deserve a mention here. To understand what these do, consider the following example:

function test($a = ANSWER,
              $b = ANSWER * ANSWER) {
    return $a + $b;
}

define('ANSWER', 42);
var_dump(test()); // int(42 + 42 * 42)

The default values for the parameters of the test() function make use of the constant ANSWER - however this constant is not yet defined at the point where the function is declared. The constant will only be available once the define() call has run.

For this reason parameter and property default values, constants and everything else accepting a “static expression” have the ability to postpone evaluation of the expression until first use.

If the value is a constant (or class constant), which is the most common case for late evaluation, this is signaled using an IS_CONSTANT zval with the constant name. If the value is an expression, an IS_CONSTANT_AST zval pointing to an abstract syntax tree (AST) is used.

And this concludes our walk through the PHP 7 value representation. Two more topics I’d like to write about at some point are some of the optimizations done in the virtual machine, in particular the new calling convention, as well as the improvements that were made to the compiler infrastructure.

News stories from Thursday 18 June, 2015

Favicon for the web hates me 14:00 Entwurfsmuster » Post from the web hates me Visit off-site link

Entwurfsmuster sind wiederkehrende Strukturen in der Softwareentwicklung. Viele Probleme, vor denen man steht, sind bereits gelöst, warum also nicht ähnliche Dinge ähnlich lösen. Es gibt einige, die muss man kennen, ich gibt viele, die sollte man kennen und auch ein paar, die man immer mal wieder nachschlagen darf. Hier findet ihr von jedem etwas.

The post Entwurfsmuster appeared first on the web hates me.

News stories from Tuesday 09 June, 2015

Favicon for Ramblings of a web guy 01:45 Apple Says My Screen Is Third Party » Post from Ramblings of a web guy Visit off-site link
I have always had the utmost respect for Apple. Even before I used Macs and before the iPhone came out, I knew they were a top notch company.

I have had five iPhones. I have had 6 or 7 MacBook Pros. My kids have Macs. My kids have iPhones. My parents use iPads. I think a lot of Apple products and service... until today.

We took my daughter's hand me down iPhone 5 in to have the ear piece and top button fixed. It's been in the family the whole time. It was never owned by anyone other than family. Last year, I took it in for the Apple Store Battery Replacement Program. That is the last time anyone had it open. In fact, that may have been the last time it was out of its case. More on this later.

After we dropped off the phone today, we were told it was going to be an hour. No problem, we could kill some time. We came back an hour later and the person brought us the phone out and tells us that they refused to work on it because the screen is a 3rd party part. Whoa! What? I tell her that the only place it was ever worked on was in that exact store. She goes to get a manager. I thought, OK, the Apple customer service I know and love is about to kick in. They are going to realize their mistake and this will all be good. Or, even if they still think it's a 3rd party screen, he will come up with some resolution for the problem. Um, no.

He says the same thing (almost verbatim) to me that the previous person said. I again tell him it has only been opened by them. He offers to take it to the back and have a technician open it up again. He was not really gone long enough for that. He comes back, points at some things on the screen and tells me that is how they know it's a 3rd party part. I again, tell him that only the Apple Store has had it open. His response is a carefully crafted piece of technicality that can only come from lawyers and businessmen. It was along the lines of "At some point, this screen has been replaced with a 3rd party screen. I am not saying you are lying. I am not claiming to know how it was replaced. I am only stating that this is a 3rd party screen." What?

So, OK, what now? I mean, it wasn't under warranty. I did not expect to get a new free phone. I was going to pay to have it fixed. Nope. They won't touch it with a ten foot pole. It has a 3rd party part on it. He claims that, because they base their repair fees on being able to refurbish and reuse the parts they pull off of the phone (the phone I own and paid for by the way), they can't offer to repair a phone with parts they can't refurbish. I can't even pay full price, whatever that is. He never gave me a price to pay for a new screen with no discounts.

At this point, I realized I needed to leave. I was so furious. I was furious it was happening. I was furious that the manager had no solution for me. I was furious that he was speaking in legalese.

Just to be clear, I could buy my daughter a new iPhone 6. I am not trying to get something for nothing. I just wanted the phone to work again. One of the things I love about Apple products is how well they hold up. Sure, you have to have some work done on them sometimes. Batteries go bad. Buttons quit working. But, let's be real. My daughter uses this thing for hours a day. I have the data bill to prove it. So, I like that I can have an Apple product repaired when it breaks and it gets a longer life. The alternative is to throw it away.

How did I end up here? I can only come up with one scenario. And the thought that this is what happened upsets me even more. When we took it for the battery replacement last year, they kept it longer than their initial estimate. And the store was dead that day. When they brought it out, the case would not fit on the bottom of the phone. It was like the screen was not on all the way. The person took it back to the back again. They came out later and it seemed to work fine. And I was fine with all of this because it's Apple. I trust(ed) Apple. But, what if they broke the screen? What if the tech that broke it used a screen from some returned phone that did have a third party part and no one caught it? Or what if Apple was knowingly using third party parts?

If I had not just had the battery replaced last year, I would think maybe there was some shenanigans in the shipping when the phone was new. We bought this phone brand new when the iPhone 5 came out. It would not come as a surprise if some devices had been intercepted and taken apart along the shipping lines. Or even in production. But, we just had it serviced at the Apple Store last year. They had no problem with the screen then other than the one they caused when they had to put it back together a second time.

This all sounds too far fetched right? Sadly, there seems to be a trend of Apple denying service to people. All of these people can't be lying. They can't all be out to get one over on Apple.



While waiting for our appointment, I overheard an Apple Genius telling a woman she "may" have had water damage. She didn't tell her she did. She did not claim the woman was lying. She thought she "may" have water damage. I don't know if she did or not. What struck me was the way she told her she "thought it could be" water damage. She told her she had seen lots of bad screens, but none of them (really? not one single screen?) had vertical lines in it like this. It's like she was setting her up to come back later and say "Darn, the tech says it is water damage." Sadly, I find myself doubting that conversation now. It makes me want to take a phone in with horizontal lines and see if I get the same story.

Of course, I know what many, many people will say to this. You will say that if I am really this upset, I should not buy any more Apple products. And you are right. That is the American way. The free market is the way to get to companies. The thing is, if I bought a Samsung Galaxy, where would I get it fixed? Would my experience be any better? There is no Samsung store. There are no authorized Samsung repair facilities. So, what would that get me? A disposable phone? Maybe that is what Apple wants. Maybe that is their goal. Deny service to people in hopes it will lead to more sales and less long-term use of their devices.

And you know what makes this all even more crappy? One of the reasons he says he knows it is a third party screen is that the home button is loose. It wasn't loose when we brought it in! I was using the phone myself to make sure a backup was done just before we handed it over to the Apple Store. They did that when they opened the screen and decided it was a third party part. So, now, my daughter's phone not only has no working ear piece and a top button that works only some of the time. Now, her home button spins around. Sigh.

News stories from Friday 22 May, 2015

Favicon for Doctrine Project 02:00 Doctrine MongoDB ODM release 1.0.0-BETA13 » Post from Doctrine Project Visit off-site link

Doctrine MongoDB ODM release 1.0.0-BETA13

We are happy to announce the immediate availability of Doctrine MongoDB ODM 1.0.0-BETA13.

What is new in 1.0.0-BETA13?

All issues and pull requests in this release may be found under the 1.0.0-BETA13 milestone on GitHub. Here are the highlights of the most important features:

atomicSet and atomicSetArray strategies for top-level collections

#1096 introduces two new collection update strategies, atomicSet and atomicSetArray. Unlike existing strategies (e.g. pushAll and set), which update collections in a separate query after the parent document, the atomic strategy ensures that the collection and its parent are updated in the same query. Any nested collections (within embedded documents) will also be included in the atomic update, irrespective of their update strategies.

Currently, atomic strategies may only be specified for collections mapped directly in a document class (i.e. not collections within embedded documents). This strategy may be especially useful for highly concurrent applications and/or versioned document classes (see: #1094).
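
A minimal mapping sketch of what this might look like using annotations (assuming the strategy attribute of the ODM's @EmbedMany mapping; the class and property names are made up for this example):

<?php
use Doctrine\ODM\MongoDB\Mapping\Annotations as ODM;

/** @ODM\Document */
class BlogPost
{
    /** @ODM\Id */
    private $id;

    /** @ODM\EmbedMany(targetDocument="Comment", strategy="atomicSet") */
    private $comments;
}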

Reference priming improvements

#1068 moves the handling of primed references to the Cursor object, which allows ODM to take the skip and limit options into account and avoid priming more references than are necessary.

#970 now allows references within embedded documents to be primed by fixing a previous parsing limitation with dot syntax in field names.

New defaultDiscriminatorValue mapping

#1072 introduces a defaultDiscriminatorValue mapping, which may be used to specify a default discriminator value if a document or association has no discriminator set.

New Integer and Bool annotation aliases

#1073 introduces Integer and Bool annotations, which are aliases of Int and Boolean, respectively.

Add millisecond precision to DateType

#1063 adds millisecond precision to ODM’s DateType class (note: although PHP supports microsecond precision, dates in MongoDB are limited to millisecond precision). This should now allow ODM to roundtrip dates from the database without a loss of precision.
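
As a small illustration of sub-second precision from the PHP side (a sketch only; it merely builds a \DateTime carrying fractional seconds, which the DateType can now round-trip at the database's millisecond resolution):

<?php
// Build a \DateTime that carries sub-second precision.
$now = \DateTime::createFromFormat('U.u', sprintf('%.6F', microtime(true)));
echo $now->format('Y-m-d H:i:s.u'), "\n"; // e.g. 2015-05-22 10:12:33.123456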

New Hydrator generation modes

Previously, the autoGenerateHydratorClasses ODM configuration option was a boolean denoting whether to always or never create Hydrator classes. As of #953, this option now supports four modes:

  • AUTOGENERATE_NEVER = 0 (same as false)
  • AUTOGENERATE_ALWAYS = 1 (same as true)
  • AUTOGENERATE_FILE_NOT_EXISTS = 2
  • AUTOGENERATE_EVAL = 3

Support for custom DocumentRepository factory

#892 allows users to define a custom repository class via the defaultRepositoryClassName configuration option. Alternatively, a custom factory class may also be configured, which allows users complete control over how repository classes are instantiated.

Custom repository and factory classes must implement Doctrine\Common\Persistence\ObjectRepository and Doctrine\ODM\MongoDB\Repository\RepositoryFactory, respectively.

Stay tuned, there is a lot more to come soon!

News stories from Monday 18 May, 2015

Favicon for ircmaxell's blog 16:30 Prefix Trees and Parsers » Post from ircmaxell's blog Visit off-site link
In my last post, Tries and Lexers, I talked about an experiment I was doing related to parsing of JavaScript code. By the end of the post I had shifted to wanting to build a HTTP router using the techniques that I learned. Let's continue where we left off...

Read more »

News stories from Friday 15 May, 2015

Favicon for ircmaxell's blog 18:00 Tries and Lexers » Post from ircmaxell's blog Visit off-site link
Lately I have been playing around with a few experimental projects. The current one started when I tried to make a templating engine. Not just an ordinary one, but one that understood the context of a variable so it could encode/escape it properly. Imagine being able to put a variable in a JavaScript string in your template, and have the engine transparently encode it correctly for you. Awesome, right? Well, while doing it, I went down a rabbit hole. And it led to something far more awesome.

Read more »

News stories from Tuesday 05 May, 2015

Favicon for Doctrine Project 02:00 DoctrineORMModule release 0.9.0 » Post from Doctrine Project Visit off-site link

DoctrineORMModule release 0.9.0

The Zend Framework Integration Team is happy to announce the new release of DoctrineORMModule.

DoctrineORMModule 0.9.0 is out of the door!!

Note that this is the last version that supports doctrine/migrations. We are working on extracting this feature into an independent module.

Follow issue #401.

The following issues were solved in this release:

To install this version, simply update your composer.json:

{
    "require": {
        "doctrine/doctrine-orm-module": "0.9.0"
    }
}
Favicon for nikic's Blog 02:00 Internal value representation in PHP 7 - Part 1 » Post from nikic's Blog Visit off-site link

My last article described the improvements to the hashtable implementation that were introduced in PHP 7. This followup will take a look at the new representation of PHP values in general.

Due to the amount of material to cover, the article is split in two parts: This part will describe how the zval (Zend value) implementation differs between PHP 5 and PHP 7, and also discuss the implementation of references. The second part will investigate the realization of individual types like strings or objects in more detail.

Zvals in PHP 5

In PHP 5 the zval struct is defined as follows:

typedef struct _zval_struct {
    zvalue_value value;
    zend_uint refcount__gc;
    zend_uchar type;
    zend_uchar is_ref__gc;
} zval;

As you can see, a zval consists of a value, a type and some additional __gc information, which we’ll talk about in a moment. The value member is a union of different possible values that a zval can store:

typedef union _zvalue_value {
    long lval;                 // For booleans, integers and resources
    double dval;               // For floating point numbers
    struct {                   // For strings
        char *val;
        int len;
    } str;
    HashTable *ht;             // For arrays
    zend_object_value obj;     // For objects
    zend_ast *ast;             // For constant expressions
} zvalue_value;

A C union is a structure in which only one member can be active at a time and whose size matches the size of its largest member. All members of the union will be stored in the same place in memory and will be interpreted differently depending on which one you access. If you read the lval member of the above union, its value will be interpreted as a signed integer. If you read the dval member the value will be interpreted as a double-precision floating point number instead. And so on.

To figure out which of these union members is currently in use, the type property of a zval stores a type tag, which is simply an integer:

#define IS_NULL     0      /* Doesn't use value */
#define IS_LONG     1      /* Uses lval */
#define IS_DOUBLE   2      /* Uses dval */
#define IS_BOOL     3      /* Uses lval with values 0 and 1 */
#define IS_ARRAY    4      /* Uses ht */
#define IS_OBJECT   5      /* Uses obj */
#define IS_STRING   6      /* Uses str */
#define IS_RESOURCE 7      /* Uses lval, which is the resource ID */
/* Special types used for late-binding of constants */
#define IS_CONSTANT 8
#define IS_CONSTANT_AST 9

Reference counting in PHP 5

Zvals in PHP 5 are (with a few exceptions) allocated on the heap and PHP needs some way to keep track of which zvals are currently in use and which should be freed. For this purpose reference counting is employed: The refcount__gc member of the zval structure stores how often a zval is currently “referenced”. For example in $a = $b = 42 the value 42 is referenced by two variables, so its refcount is 2. If the refcount reaches zero, it means a value is unused and can be freed.

Note that the references that the refcount refers to (how many times a value is currently used) have nothing to do with PHP references (using &). In the following I will always use the terms “reference” and “PHP reference” to disambiguate the two concepts. For now we’ll ignore PHP references altogether.

A concept that is closely related to reference counting is “copy on write”: A zval can only be shared between multiple users as long as it isn’t modified. In order to change a shared zval it needs to be duplicated (“separated”) and the modification will happen only on the duplicated zval.

Let’s look at an example that shows off both copy-on-write and zval destruction:

$a = 42;   // $a         -> zval_1(type=IS_LONG, value=42, refcount=1)
$b = $a;   // $a, $b     -> zval_1(type=IS_LONG, value=42, refcount=2)
$c = $b;   // $a, $b, $c -> zval_1(type=IS_LONG, value=42, refcount=3)

// The following line causes a zval separation
$a += 1;   // $b, $c -> zval_1(type=IS_LONG, value=42, refcount=2)
           // $a     -> zval_2(type=IS_LONG, value=43, refcount=1)

unset($b); // $c -> zval_1(type=IS_LONG, value=42, refcount=1)
           // $a -> zval_2(type=IS_LONG, value=43, refcount=1)

unset($c); // zval_1 is destroyed, because refcount=0
           // $a -> zval_2(type=IS_LONG, value=43, refcount=1)

Reference counting has one fatal flaw: It is not able to detect and release cyclic references. To handle this PHP uses an additional cycle collector. Whenever the refcount of a zval is decremented and there is a chance that this zval is part of a cycle, the zval is written into a “root buffer”. Once this root buffer is full, potential cycles are collected using a mark-and-sweep garbage collection pass.
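
For illustration, here is a minimal PHP sketch of a value that plain reference counting alone can never reclaim; only the cycle collector frees it:

<?php
$a = new stdClass();
$a->self = $a;       // the object references itself, so its refcount is 2
unset($a);           // refcount drops to 1, but the object is unreachable

// Plain refcounting would leak this object until the end of the request;
// the cycle collector is what eventually frees it.
gc_collect_cycles();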

In order to support this additional cycle collector, the actually used zval structure is the following:

typedef struct _zval_gc_info {
    zval z;
    union {
        gc_root_buffer       *buffered;
        struct _zval_gc_info *next;
    } u;
} zval_gc_info;

The zval_gc_info structure embeds the normal zval, as well as one additional pointer - note that u is a union, so this is really just one pointer with two different types it may point to. The buffered pointer is used to store where in the root buffer this zval is referenced, so that it may be removed from it if it’s destroyed before the cycle collector runs (which is very likely). next is used when the collector destroys values, but I won’t go into that here.

Motivation for change

Let’s talk about sizes a bit (all sizes are for 64-bit systems): First of all, the zvalue_value union is 16 bytes large, because both the str and obj members have that size. The whole zval struct is 24 bytes (due to padding) and zval_gc_info is 32 bytes. On top of this, allocating the zval on the heap adds another 16 bytes of allocation overhead. So we end up using 48 bytes per zval - although this zval may be used by multiple places.

At this point we can start thinking about the (many) ways in which this zval implementation is inefficient. Consider the simple case of a zval storing an integer, which by itself is 8 bytes. Additionally the type-tag needs to be stored in any case, which is a single byte by itself, but due to padding needs another 8 bytes.

To these 16 bytes that we really “need” (in first approximation), we add another 16 bytes handling reference counting and cycle collection and another 16 bytes of allocation overhead. Not to mention that we actually have to perform that allocation and the subsequent free, both being quite expensive operations.

This raises the question: Does a simple integer value really need to be stored as a reference-counted, cycle-collectible, heap-allocated value? The answer to this question is of course, no, this doesn’t make sense.

Here is a summary of the primary problems with the PHP 5 zval implementation:

  • Zvals (nearly) always require a heap allocation.
  • Zvals are always reference counted and always have cycle collection information, even in cases where sharing the value is not worthwhile (an integer) and it can’t form cycles.
  • Directly refcounting the zvals leads to double refcounting in the case of objects and resources. The reasons behind this will be explained in the next part.
  • Some cases involve quite an awesome amount of indirection. For example to access the object stored in a variable, a total of four pointers need to be dereferenced (which means following a pointer chain of length four). Once again this will be discussed in the next part.
  • Directly refcounting the zvals also means that values can only be shared between zvals. For example it’s not possible to share a string between a zval and a hashtable key (without storing the hashtable key as a zval as well).

Zvals in PHP 7

And this brings us to the new zval implementation in PHP 7. The fundamental change is that zvals are no longer individually heap-allocated and no longer store a refcount themselves. Instead any complex values they may point to (like strings, arrays or objects) will store the refcount themselves. This has the following advantages:

  • Simple values do not require allocation and don’t use refcounting.
  • There is no more double refcounting. In the object case, only the refcount in the object is used now.
  • Because the refcount is now stored in the value itself, the value can be shared independently of the zval structure. A string can be used both in a zval and a hashtable key.
  • There is a lot less indirection, i.e. the number of pointers you need to follow to get to a value is lower.

Now let’s take a look at how the new zval is defined:

struct _zval_struct {
    zend_value value;
    union {
        struct {
            ZEND_ENDIAN_LOHI_4(
                zend_uchar type,
                zend_uchar type_flags,
                zend_uchar const_flags,
                zend_uchar reserved)
        } v;
        uint32_t type_info;
    } u1;
    union {
        uint32_t var_flags;
        uint32_t next;                 // hash collision chain
        uint32_t cache_slot;           // literal cache slot
        uint32_t lineno;               // line number (for ast nodes)
        uint32_t num_args;             // arguments number for EX(This)
        uint32_t fe_pos;               // foreach position
        uint32_t fe_iter_idx;          // foreach iterator index
    } u2;
};

The first member stays pretty similar, this is still a value union. The second member is an integer storing type information, which is further subdivided into individual bytes using a union (you can ignore the ZEND_ENDIAN_LOHI_4 macro, which just ensures a consistent layout across platforms with different endianness). The important parts of this substructure are the type (which is similar to what it was before) and the type_flags, which I’ll explain in a moment.

At this point there exists a small problem: The value member is 8 bytes large and due to struct padding adding even a single byte to that grows the zval size to 16 bytes. However we obviously don’t need 8 bytes just to store a type. This is why the zval contains the additional u2 union, which remains unused by default, but can be repurposed by the surrounding code to store 4 bytes of data. The different union members correspond to different usages of this extra data slot.

The value union looks slightly different in PHP 7:

typedef union _zend_value {
    zend_long         lval;
    double            dval;
    zend_refcounted  *counted;
    zend_string      *str;
    zend_array       *arr;
    zend_object      *obj;
    zend_resource    *res;
    zend_reference   *ref;
    zend_ast_ref     *ast;

    // Ignore these for now, they are special
    zval             *zv;
    void             *ptr;
    zend_class_entry *ce;
    zend_function    *func;
    struct {
        ZEND_ENDIAN_LOHI(
            uint32_t w1,
            uint32_t w2)
    } ww;
} zend_value;

First of all, note that the value union is now 8 bytes instead of 16. It will only store integers (lval) and doubles (dval) directly; everything else is a pointer. All the pointer types (apart from those marked as special above) use refcounting and have a common header defined by zend_refcounted:

struct _zend_refcounted {
    uint32_t refcount;
    union {
        struct {
            ZEND_ENDIAN_LOHI_3(
                zend_uchar    type,
                zend_uchar    flags,
                uint16_t      gc_info)
        } v;
        uint32_t type_info;
    } u;
};

Of course the structure contains a refcount. Additionally it contains a type, some flags and gc_info. The type just duplicates the zval type and allows the GC to distinguish different refcounted structures without storing a zval. The flags are used for different purposes with different types and will be explained for each type separately in the next part.

The gc_info is the equivalent of the buffered entry in the old zvals. However, instead of storing a pointer into the root buffer, it now contains an index into it. Because the root buffer has a fixed size (10000 elements), a 16-bit number is enough for this instead of a 64-bit pointer. The gc_info also encodes the “color” of the node, which is used to mark nodes during collection.

Zval memory management

I’ve mentioned that zvals are no longer individually heap-allocated. However, they obviously still need to be stored somewhere, so how does this work? While zvals are still mostly part of heap-allocated structures, they are directly embedded into them. E.g. a hashtable bucket will directly embed a zval instead of storing a pointer to a separate zval. The compiled variables table of a function or the property table of an object will be zval arrays that are allocated in one chunk, instead of storing pointers to separate zvals. As such, zvals are now usually stored with one less level of indirection. What was previously a zval* is now a zval.

Previously, using a zval in a new place meant copying a zval* and incrementing its refcount. Now it means copying the contents of the zval (ignoring u2) and, if the value it points to uses refcounting, incrementing that value’s refcount.

How does PHP know whether a value is refcounted? This cannot be determined solely based on the type, because some types like strings and arrays are not always refcounted. Instead, one bit of the zval’s type_info member determines whether or not the zval is refcounted. There are a number of other bits encoding properties of the type:

#define IS_TYPE_CONSTANT            (1<<0)   /* special */
#define IS_TYPE_IMMUTABLE           (1<<1)   /* special */
#define IS_TYPE_REFCOUNTED          (1<<2)
#define IS_TYPE_COLLECTABLE         (1<<3)
#define IS_TYPE_COPYABLE            (1<<4)
#define IS_TYPE_SYMBOLTABLE         (1<<5)   /* special */

The three primary properties a type can have are “refcounted”, “collectable” and “copyable”. You already know what refcounted means. Collectable means that the zval can participate in a cycle. E.g. strings are (often) refcounted, but there’s no way you can create a cycle with a string in it.

Copyability determines whether the value needs to be copied when a “duplication” is performed. A duplication is a hard copy, e.g. if you duplicate a zval that points to an array, this will not simply increase the refcount on the array. Instead a new and independent copy of the array will be created. However, for some types like objects and resources, even a duplication should only increment the refcount - such types are called non-copyable. This matches the passing semantics of objects and resources (which are, for the record, not passed by reference).
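As a rough illustration of how these properties come together, here is a minimal sketch based only on the structures and constants shown above (the engine wraps these checks in macros such as Z_REFCOUNTED and ZVAL_COPY; the helper names below are made up):

// Illustrative sketch, not actual engine code.
static int sketch_is_refcounted(const zval *zv) {
    return zv->u1.v.type_flags & IS_TYPE_REFCOUNTED;
}

// Using a zval in a new place: copy the struct by value and, only if the
// value it points to is refcounted, bump the refcount stored in that value.
static void sketch_copy_zval(zval *dst, const zval *src) {
    *dst = *src;
    if (sketch_is_refcounted(dst)) {
        dst->value.counted->refcount++;
    }
}

A hard “duplication” would additionally consult IS_TYPE_COPYABLE to decide whether the pointed-to structure itself has to be cloned, or whether, as for objects and resources, bumping the refcount is all that ever happens.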

The following table shows the different types and what type flags they use. “Simple types” refers to types like integers or booleans that don’t use a pointer to a separate structure. A column for the “immutable” flag is also present, which is used to mark immutable arrays and will be discussed in more detail in the next part.

                | refcounted | collectable | copyable | immutable
----------------+------------+-------------+----------+----------
simple types    |            |             |          |
string          |      x     |             |     x    |
interned string |            |             |          |
array           |      x     |      x      |     x    |
immutable array |            |             |          |     x
object          |      x     |      x      |          |
resource        |      x     |             |          |
reference       |      x     |             |          |

At this point, let’s take a look at two examples of how the zval management works in practice. First, an example using integers based on the PHP 5 example from above:

$a = 42;   // $a = zval_1(type=IS_LONG, value=42)

$b = $a;   // $a = zval_1(type=IS_LONG, value=42)
           // $b = zval_2(type=IS_LONG, value=42)

$a += 1;   // $a = zval_1(type=IS_LONG, value=43)
           // $b = zval_2(type=IS_LONG, value=42)

unset($a); // $a = zval_1(type=IS_UNDEF)
           // $b = zval_2(type=IS_LONG, value=42)

This is pretty boring. As integers are no longer shared, both variables will use separate zvals. Don’t forget that these are now embedded rather than allocated, which I try to signify by writing = instead of a -> pointer. Unsetting a variable will set the type of the corresponding zval to IS_UNDEF. Now consider a more interesting case where a complex value is involved:

$a = [];   // $a = zval_1(type=IS_ARRAY) -> zend_array_1(refcount=1, value=[])

$b = $a;   // $a = zval_1(type=IS_ARRAY) -> zend_array_1(refcount=2, value=[])
           // $b = zval_2(type=IS_ARRAY) ---^

// Zval separation occurs here
$a[] = 1;  // $a = zval_1(type=IS_ARRAY) -> zend_array_2(refcount=1, value=[1])
           // $b = zval_2(type=IS_ARRAY) -> zend_array_1(refcount=1, value=[])

unset($a); // $a = zval_1(type=IS_UNDEF) and zend_array_2 is destroyed
           // $b = zval_2(type=IS_ARRAY) -> zend_array_1(refcount=1, value=[])

Here each variable still has a separate (embedded) zval, but both zvals point to the same (refcounted) zend_array structure. Once a modification is done, the array needs to be duplicated. This case is similar to how things work in PHP 5.

Types

Let’s take a closer look at what types are supported in PHP 7:

// regular data types
#define IS_UNDEF                    0
#define IS_NULL                     1
#define IS_FALSE                    2
#define IS_TRUE                     3
#define IS_LONG                     4
#define IS_DOUBLE                   5
#define IS_STRING                   6
#define IS_ARRAY                    7
#define IS_OBJECT                   8
#define IS_RESOURCE                 9
#define IS_REFERENCE                10

// constant expressions
#define IS_CONSTANT                 11
#define IS_CONSTANT_AST             12

// internal types
#define IS_INDIRECT                 15
#define IS_PTR                      17

This list is quite similar to what was used in PHP 5; however, there are a few additions:

  • The IS_UNDEF type is used in places where previously a NULL zval pointer (not to be confused with an IS_NULL zval) was used. For example, in the refcounting examples above the IS_UNDEF type is set for variables that have been unset.
  • The IS_BOOL type has been split into IS_FALSE and IS_TRUE. As such the value of the boolean is now encoded in the type, which allows the optimization of a number of type-based checks. This change is transparent to userland, where this is still a single “boolean” type.
  • PHP references no longer use an is_ref flag on the zval and use a new IS_REFERENCE type instead. How this works will be described in the next section.
  • The IS_INDIRECT and IS_PTR types are special internal types.

The IS_LONG type now uses a zend_long value instead of an ordinary C long. The reason behind this is that on 64-bit Windows (LLP64) a long is only 32 bits wide, so PHP 5 ended up always using 32-bit numbers on Windows. PHP 7 will allow you to use 64-bit numbers if you’re on a 64-bit operating system, even if that operating system is Windows.
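Conceptually this boils down to picking a fixed-width integer type instead of relying on the platform’s long. A hedged sketch of the idea (the real definition lives in the engine headers and is chosen by the build system; the macro and typedef names here are purely illustrative):

#include <stdint.h>

// Sketch of the idea behind zend_long: choose the width explicitly instead of
// relying on the platform's long, which is only 32 bits wide on LLP64 Windows.
#if defined(SKETCH_64BIT_PLATFORM)   // placeholder for the engine's own check
typedef int64_t sketch_zend_long;    // 64-bit integers, even on 64-bit Windows
#else
typedef int32_t sketch_zend_long;    // 32-bit platforms keep 32-bit integers
#endif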

Details of the individual zend_refcounted types will be discussed in the next part. For now we’ll only look at the implementation of PHP references.

References

PHP 7 uses an entirely different approach to handling PHP references than PHP 5 (and I can tell you that this change is one of the largest sources of bugs in PHP 7). Let’s start by taking a look at how PHP references used to work in PHP 5:

Normally, the copy-on-write principle says that before modifying a zval it needs to be separated, in order to make sure you don’t end up changing the value for every place sharing the zval. This matches by-value passing semantics.

For PHP references this does not apply. If a value is a PHP reference, you want it to change for every user of the value. The is_ref flag that was part of PHP 5 zvals determined whether a value is a PHP reference and as such whether it required separation before modification. An example:

$a = [];  // $a     -> zval_1(type=IS_ARRAY, refcount=1, is_ref=0) -> HashTable_1(value=[])
$b =& $a; // $a, $b -> zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_1(value=[])

$b[] = 1; // $a = $b = zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_1(value=[1])
          // Due to the is_ref=1 PHP will *not* separate the zval

One significant problem with this design is that it’s not possible to share a value between a variable that’s a PHP reference and one that isn’t. Consider the following example:

$a = [];  // $a         -> zval_1(type=IS_ARRAY, refcount=1, is_ref=0) -> HashTable_1(value=[])
$b = $a;  // $a, $b     -> zval_1(type=IS_ARRAY, refcount=2, is_ref=0) -> HashTable_1(value=[])
$c = $b;  // $a, $b, $c -> zval_1(type=IS_ARRAY, refcount=3, is_ref=0) -> HashTable_1(value=[])

$d =& $c; // $a, $b -> zval_1(type=IS_ARRAY, refcount=2, is_ref=0) -> HashTable_1(value=[])
          // $c, $d -> zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_2(value=[])
          // $d is a reference of $c, but *not* of $a and $b, so the zval needs to be copied
          // here. Now we have the same zval once with is_ref=0 and once with is_ref=1.

$d[] = 1; // $a, $b -> zval_1(type=IS_ARRAY, refcount=2, is_ref=0) -> HashTable_1(value=[])
          // $c, $d -> zval_1(type=IS_ARRAY, refcount=2, is_ref=1) -> HashTable_2(value=[1])
          // Because there are two separate zvals $d[] = 1 does not modify $a and $b.

This behavior of references is one of the reasons why using references in PHP will usually end up being slower than using normal values. To give a less-contrived example where this is a problem:

$array = range(0, 1000000);
$ref =& $array;
var_dump(count($array)); // <-- separation occurs here

Because count() accepts its argument by value, but $array is a PHP reference, a full copy of the array is made before passing it to count(). If $array weren’t a reference, the value would be shared instead.

Now, let’s switch to the PHP 7 implementation of PHP references. Because zvals are no longer individually allocated, it is not possible to use the same approach that PHP 5 used. Instead a new IS_REFERENCE type is added, which uses the zend_reference structure as its value:

struct _zend_reference {
    zend_refcounted   gc;
    zval              val;
};

So essentially a zend_reference is simply a refcounted zval. All variables in a reference set will store a zval with type IS_REFERENCE pointing to the same zend_reference instance. The val zval behaves like any other zval; in particular, it is possible to share a complex value it points to. E.g. an array can be shared between a variable that is a reference and another that is a value.
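Code that implements by-value semantics therefore first follows the reference to the embedded val zval and works with that. Roughly (an illustrative sketch assuming the structures above; the engine has a macro for this, the name below is made up):

// Illustrative sketch: if the zval is a PHP reference, hop to the zval that
// is embedded in the shared zend_reference and use that from then on.
static zval *sketch_deref(zval *zv) {
    if (zv->u1.v.type == IS_REFERENCE) {
        zv = &zv->value.ref->val;
    }
    return zv;
}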

Let’s go through the above code samples again, this time looking at the PHP 7 semantics. For the sake of brevity, I will stop writing the individual zvals of the variables and only show what structure they point to.

$a = [];  // $a                                     -> zend_array_1(refcount=1, value=[])
$b =& $a; // $a, $b -> zend_reference_1(refcount=2) -> zend_array_1(refcount=1, value=[])

$b[] = 1; // $a, $b -> zend_reference_1(refcount=2) -> zend_array_1(refcount=1, value=[1])

The by-reference assignment created a new zend_reference. Note that the refcount is 2 on the reference (because two variables are part of the PHP reference set), but the value itself only has a refcount of 1 (because one zend_reference structure points to it). Now consider the case where references and non-references are mixed:

$a = [];  // $a         -> zend_array_1(refcount=1, value=[])
$b = $a;  // $a, $b     -> zend_array_1(refcount=2, value=[])
$c = $b;  // $a, $b, $c -> zend_array_1(refcount=3, value=[])

$d =& $c; // $a, $b                                 -> zend_array_1(refcount=3, value=[])
          // $c, $d -> zend_reference_1(refcount=2) ---^
          // Note that all variables share the same zend_array, even though some are
          // PHP references and some aren't.

$d[] = 1; // $a, $b                                 -> zend_array_1(refcount=2, value=[])
          // $c, $d -> zend_reference_1(refcount=2) -> zend_array_2(refcount=1, value=[1])
          // Only at this point, once an assignment occurs, the zend_array is duplicated.

The important difference from PHP 5 is that all variables were able to share the same array, even though some were PHP references and some weren’t. The array is only separated once some kind of modification is performed. This means that in PHP 7 it’s safe to pass a large, referenced array to count(); it is not going to be duplicated. References will still be slower than normal values, because they require allocation of the zend_reference structure (and indirection through it) and are usually not handled in the fast path of engine code.

Wrapping up

To summarize, the primary change that was implemented in PHP 7 is that zvals are no longer individually heap-allocated and no longer store a refcount themselves. Instead, any complex values they may point to (like strings, arrays or objects) will store the refcount themselves. This usually leads to fewer allocations, less indirection and less memory usage.

In the second part of this article the remaining complex types will be discussed.

News stories from Thursday 16 April, 2015

News stories from Wednesday 15 April, 2015

Favicon for Doctrine Project 02:00 Cache 1.4.1 Released » Post from Doctrine Project Visit off-site link

Cache 1.4.1 Released

We are happy to announce the immediate availability of Doctrine Cache 1.4.1.

This release fixes a series of bugs related to null, false or truncated data in the SQLite3 and Memcache adapters (#62, #65, #67).

Improvements have been made to reduce the SQLite3 cache adapter memory usage (#64).

If you use an opcode cache such as OPCache (available since PHP 5.5), you will get major performance improvements in read operations in the PhpFileCache, which shouldn’t cause any stat calls at all now (#69).

Multi-get support was built into the Redis adapter (#60).

A new VoidCache adapter has been introduced - useful for testing (#61).

You can find the complete changelog for this release in the release notes.

You can install the Cache component using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/cache": "1.4.1"
    }
}

Please report any issues you may have with the update on the mailing list or on Jira.

Favicon for Fabien Potencier 00:00 Blackfire, a new Profiler for PHP Developers » Post from Fabien Potencier Visit off-site link

blackfire_primary_square.png

I've always been fascinated by debugging tools; tools that help you understand what's going on in your code. In the Symfony world, the web debug toolbar and the web profiler are tools that gives a lot of information about HTTP request/response pairs (from exceptions to logs, submitted forms and even an event timeline), but it's only available in development mode as enabling those features in production would have a too significant performance impact. The Symfony profiler is also more about giving metadata about the code execution and less about what is executed.

If you want to understand which part of your code is executed for any given request, and where the server resources are spent, you need special tools; tools that instrument your code at the C level. The oldest tool able to do that is XDebug, and a few years ago Facebook also open-sourced XHProf. Both XDebug (when used as a profiler) and XHProf can answer a lot of questions you might have about the performance of your code, and they can help you understand why your code is slow.

But even if tools are available, performance monitoring in the PHP world is not that widespread. You are probably writing unit tests for your applications to ensure that you don't accidentally deploy broken features and to avoid regressions when you are fixing bugs. But what about performance? A broken page is a problem, but what about a page that takes seconds to display? Less performance means less business. So, continuously testing the performance of your applications should be a critical part of your development workflow.

Enter Blackfire. Blackfire is a PHP profiler that simplifies the profiling of an app as much as possible.

The first big difference with existing tools is the installation process; we've made it straightforward by providing easy-to-follow instructions for a lot of different platforms and Blackfire is even included by default on some major PHP cloud providers.

Once installed, profiling an HTTP request is as easy as it can get: use the Google Chrome extension to profile web pages from your browser, or use the command line tool to profile web services, APIs, PHP CLI scripts, or even long-running scripts like daemons or workers.

The other major difference with existing tools comes from the fact that Blackfire is a SaaS product. It lets us do a lot of things that would not be possible otherwise, like storing the history of your profiles, making comparisons between two profiles really easy, or providing a rich and interactive UI that evolves on a day-to-day basis.

If you've used XHProf in the past, you might wonder if it would make sense for you to upgrade to Blackfire. First, and unlike a popular belief, the current Blackfire PHP extension is not based on the XHProf code anymore. Starting from scratch helped us lower the overhead and structure the code for extensibility.

Then, besides the "better experience", Blackfire offers some unique features like:

  • Profile your applications without changing a single line of code;
  • Easily focus on code you need to optimize thanks to more accurate results, aggregation, and smart cleaning of data;
  • More information about CPU time and I/O time;
  • No performance impact on the production servers when not using the profiler;
  • SQL statements and HTTP calls extraction;
  • Team profiling;
  • Profile sharing;
  • An API;
  • Garbage collector information;
  • The soon-to-be-announced Windows support;
  • And much more...

We are very active on our blog where you can learn more about the great features we are providing for developers and companies.

Blackfire has been in public beta for four months now and the response has been amazing so far. More than 20,000 developers have already signed up. You can read some user feedback on our Twitter account, and some of them even wrote about their experience on the Blackfire blog: I recommend the article from ownCloud as they did a lot of performance tweaks to make their code run faster thanks to Blackfire.

My mission with Blackfire is to give developers the best possible profiler for their applications. Try it out today for free and tell me what you think!

News stories from Tuesday 14 April, 2015

Favicon for Doctrine Project 02:00 Doctrine Annotations 1.2.4 Release » Post from Doctrine Project Visit off-site link

Doctrine Annotations 1.2.4 Release

We are happy to announce the immediate availability of Doctrine Annotations 1.2.4.

This release fixes a minor issue (#51) with highly concurrent I/O and the FileCacheReader#saveCacheFile() method.

Installation

You can install this version of Doctrine Annotations by using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/annotations": "1.2.4"
    }
}

Please report any issues you may have with the update on the mailing list or on Jira.

Favicon for Doctrine Project 02:00 Doctrine Collections 1.3.0 Release » Post from Doctrine Project Visit off-site link

Doctrine Collections 1.3.0 Release

We are happy to announce the immediate availability of Doctrine Collections 1.3.0.

Installation

You can install this version of Doctrine Collections by using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/collections": "1.3.0"
    }
}

Changes since 1.2.0

This is a list of issues solved in 1.3.0 since 1.2.0:

  • [26]: Explicit casting of first and max results in the criteria API
  • [30]: typo fixes
  • [31]: CS fixes and tidy up of the tests
  • [36]: Tidy up and CS fixes
  • [42]: small style changes to comply with PSR2
  • [47]: Added build status badge
  • [49]: Keep keys when using ArrayCollection#matching()
  • [52]: Made AbstractLazyCollection#$initialized protected for extensibility.
  • [56]: travis: PHP 7.0 nightly added + few improvements

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Thursday 02 April, 2015

Favicon for Doctrine Project 02:00 Doctrine ORM 2.5.0 Release » Post from Doctrine Project Visit off-site link

Doctrine ORM 2.5.0 Release

We are happy to announce the immediate availability of Doctrine ORM 2.5.0.

This release spans over almost 2 years of development, and is a major effort by the team and the community to make the ORM more robust and performant.

457 issues were resolved in this release; we are very proud of the work done by the community and the core team.

What is new in 2.5.x?

Doctrine ORM 2.5.0 comes with a set of major improvements:

  • The Second-level Cache, a component that greatly improves ORM performance
  • Embeddable classes, allowing for a more fine-grained design of your entities without having to resort to one-to-one associations for Value Objects
  • Entity type specific event listeners, for improved event handling performance
  • Improvements in the Criteria Collection filtering API, now also supporting EXTRA_LAZY filtering

What has to be done to upgrade to 2.5.x?

Some backwards incompatible changes were also involved in this release: to read them, along with a more extensive list of the 2.5.0 changes, please consult the upgrade notes.

Stability

We currently do not have a release schedule for Doctrine ORM 2.6.0.

As of today, Doctrine ORM 2.5.x is our stable distribution, and will receive regular bugfix releases.

Doctrine ORM 2.4.8 will be the last bugfix release for the 2.4.x series. Further releases will only occur in the eventuality of a security issue being discovered.

We will also keep patching previous versions of the ORM in the eventuality of a security issue being discovered.

Installation

You can install this version of the ORM by using Composer and the following composer.json contents:
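
{
    "require": {
        "doctrine/orm": "2.5.0"
    }
}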

Changes since 2.4.0

This is a list of issues resolved in 2.5.0 since 2.4.0:

New Feature

  • DDC-93 - It would be nice if we could have support for ValueObjects
  • DDC-1149 - Optimize OneToMany and ManyToMany without join
  • DDC-1216 - A way to mark an entity to always use result cache. Like @UseResultCache class annotation.
  • DDC-1247 - Implement AnnotationDriver::addExcludePath
  • DDC-1563 - Result cache for repository queries
  • DDC-2021 - Array Data in Member OF
  • DDC-2773 - #835 Value objects (Based on #634)
  • DDC-2959 - #937 Extra-lazy for containsKey on collections
  • DDC-3117 - #1027 Support for Partial Indexes for PostgreSql and Sqlite
  • DDC-3161 - #1054 SQLFilters enahancements
  • DDC-3186 - #1069 added method to be able to reuse the console application
  • DDC-3231 - #1089 Entity repository generator default repository
  • DDC-3300 - #1130 Added resolve entities support in discrim. map
  • DDC-3385 - #1181 Support fetching entities by aliased name
  • DDC-3462 - #1230 Allow dumping SQL query when passing DQL on cli
  • DDC-3503 - #1257 Resolve target entity also in discriminator map (allows interfaces and custom names in discriminator map)
  • DDC-3567 - #1303 make QueryBuilder::getAllAliases public

Improvement

  • DDC-54 - Trigger postLoad events and callbacks after associations have been initialized
  • DDC-1590 - Fix Inheritance in Code-Generation
  • DDC-1787 - Fix for JoinedSubclassPersister, multiple inserts with versioning throws an optimistic locking exception
  • DDC-1858 - LIKE and IS NULL operators not supported in HAVING clause
  • DDC-2052 - Custom tree walkers are not allowed to add new components to the query
  • DDC-2061 - Matching Criteria on a PersistentCollection only works on OneToMany associations
  • DDC-2128 - #507 Now MetaDataFilter takess also regexp. For example whern you want to
  • DDC-2183 - Second Level Cache improvements
  • DDC-2210 - PHP warning in ProxyFactory when renaming proxy file
  • DDC-2217 - Return a lazy collection from PersistentCollection::match($criteria)
  • DDC-2319 - #590 DQL Query: process ArrayCollection values to ease development
  • DDC-2534 - #711 Coveralls code coverage
  • DDC-2538 - #713 Quick grammar fix
  • DDC-2544 - #717 Allow query parameters starting with an underscore
  • DDC-2546 - #719 Access properties via static:: instead of self::.
  • DDC-2615 - LIKE operator not supported in HAVING clause
  • DDC-2636 - Handle SQLite with dot notation in @Table and @JoinTable
  • DDC-2639 - #771 Added indexBy option to createQueryBuilder
  • DDC-2770 - #833 Generate-Entities-Console-Command: Adding an ‘avoid backup’ flag
  • DDC-2789 - #844 Teach orm:validate-schema to –skip-mapping and –skip-sync
  • DDC-2794 - the Paginator does not support arbitrary join
  • DDC-2814 - #858 lifts an unnecessary restriction on ResultSetMappingBuilder
  • DDC-2824 - #863 The new configuration option: defaultQueryHints
  • DDC-2861 - #881 Fix persistence exception on a table with a schema on a platform without schema support
  • DDC-2865 - #882 Efficient counting on Criteria
  • DDC-2868 - #885 Add support for ManyToMany Criteria
  • DDC-2926 - #914 added license badge
  • DDC-2970 - #946 Cleaned up unused imports
  • DDC-2981 - Multi get for second level cache (Doctrine Cache related)
  • DDC-2982 - #954 Multi Get support for Second Level Cache
  • DDC-2984 - Support Custom DBAL types to be used as identifiers
  • DDC-2991 - #957 makes doctrine less dependent upon the symfony yaml component
  • DDC-2999 - #962 Stop executeDeletions when there is nothing to to delete anymore
  • DDC-3000 - #963 SQLFilter – allows to check if a parameter was set
  • DDC-3004 - #966 Simplify build matrix
  • DDC-3005 - Events::postLoad fires without filled associations
  • DDC-3014 - #973 Added index flags support in annotation, xml & yaml mapping drivers.
  • DDC-3032 - #980 Added options attribute export to Annotation, Xml & Yaml exporters.
  • DDC-3039 - #983 Added MEMBER OF and INSTANCE OF to ExpressionBuilder
  • DDC-3068 - EntityManager::find does not accept an array of object as a primary key
  • DDC-3070 - #1001 DDC-3005 Defer invoking of postLoad event to the end of hydration cycle.
  • DDC-3076 - #1006 Handling invalid discriminator values
  • DDC-3114 - #1026 Remove some redundant clauses
  • DDC-3133 - #1036 Move space addition to implementation.
  • DDC-3138 - #1037 I can’t look at those semicolons, sorry ;-)
  • DDC-3150 - #1047 Minor grammatical corrections
  • DDC-3178 - #1064 remove on-update from join-column
  • DDC-3249 - #1105 Add support for nesting embeddables
  • DDC-3257 - #1112 DefaultRepositoryFactory: single repository for aliased entities
  • DDC-3258 - #1113 Added support for composite primary key on findBy methods and Criteria
  • DDC-3274 - Improve schema validator error message
  • DDC-3275 - #1121 DDC-3274 Improve schema validator error message for invalid bi-directional relations
  • DDC-3276 - #1122 Support arithmetic expressions in COUNT()
  • DDC-3304 - [EntityGenerator] Embeddables properties and methods are broken
  • DDC-3305 - #1133 [Embeddables] Improved exception message
  • DDC-3307 - #1135 DDC-3304 Add support for embeddables in entity generator
  • DDC-3418 - Indexes not inherited from mapped superclass
  • DDC-3457 - #1227 Ensure query cache is not ArrayCache in production
  • DDC-3461 - #1229 Identity in onetoone association builder
  • DDC-3477 - #1238 Avoid prefixing columns when false is assigned to column-prefix
  • DDC-3479 - #1240 Include IDs in the exception message to ease debugging
  • DDC-3483 - #1243 Fixed phpunit tests autoload requirements and moved to composer autoload-dev
  • DDC-3486 - #1245 Implemented support for one to many extra lazy with joined inheritance.
  • DDC-3487 - #1246 Moved delete() and update() to proper locations.
  • DDC-3490 - #1248 improved error handling for invalid association values #2
  • DDC-3492 - #1249 Support for extra lazy get for both owning and inverse side on many to many associations.
  • DDC-3495 - #1251 travis: optimize to run coverage only once
  • DDC-3496 - #1252 Include className in calls to NamingStrategy joinColumnName method
  • DDC-3501 - #1255 Cleanup: PHP 5.3 support end
  • DDC-3504 - #1258 Classify persisters into more granular namespaces.
  • DDC-3514 - LimitSubqueryOutputWalker should not duplicate orderBy clauses
  • DDC-3521 - #1269 DDC-3520 self-update composer before install
  • DDC-3528 - #1274 PersistentCollection now extends AbstractLazyCollection.
  • DDC-3541 - #1286 Removing XDebug from non-coverage builds
  • DDC-3546 - #1289 Improve test suite
  • DDC-3549 - #1292 Mark getSelectConditionStatementColumnSQL method as private
  • DDC-3588 - #1314 DATE_ADD - Support for seconds
  • DDC-3590 - #1316 Allow to join non-public schema tables
  • DDC-3594 - #1319 travis: PHP 7.0 nightly added
  • DDC-3607 - #1326 Allow AssociationBuilder to set a relation as orphan removal
  • DDC-3630 - #1343 Support embeddables in partial object query expression DDC-3621
  • DDC-2850 - Allow cascaded clearing of Entities associated to the indicated Entity

Bugfix

  • DDC-1624 - Locking CTI doesnt work on SQL Server
  • DDC-2310 - Recent changes to DBAL SQL Server platform lock hinting breaks ORM SqlWalker in DQL queries with joins
  • DDC-2352 - #615 Update SqlWalker.php
  • DDC-2372 - #632 entity generator - ignore trait properties and methods
  • DDC-2504 - #696 extra lazy joined test
  • DDC-2559 - #728 Color message like the update tools
  • DDC-2561 - #729 add missing hint about lifecycle callback
  • DDC-2562 - #730 To avoid “SpacingAfterParams” error with PHPCS Symfony2 coding standard
  • DDC-2566 - #732 Update working-with-associations.rst
  • DDC-2568 - #733 Update Parser.php
  • DDC-2572 - ResolveTargetEntityListener does not work as documented.
  • DDC-2573 - #735 Fix proxy performance test
  • DDC-2575 - Hydration bug
  • DDC-2580 - #739 Fix DDC-2579
  • DDC-2581 - #740 Synchronized support of FilterCollection with ODM by adding missing method
  • DDC-2584 - #743 Added coverage to DDC-2524. Updated DDC-1719 to fix related DBAL bug.
  • DDC-2588 - #745 Update basic-mapping.rst
  • DDC-2591 - #747 fix some file mode 755->644
  • DDC-2592 - #748 Add hour to DATE_ADD and DATE_SUB
  • DDC-2603 - #751 Added coverage for querying support during postLoad.
  • DDC-2604 - #752 ORM side fixes.
  • DDC-2616 - #759 Fixed out of sync code examples in getting-started.rst
  • DDC-2624 - ManyToManyPersister fails to handle cloned PeristentCollections
  • DDC-2652 - #777 Fixed typo in mapping documentation
  • DDC-2653 - #778 Fixed typo in property mapping
  • DDC-2654 - #779 Fixed grammar in custom data types
  • DDC-2656 - #780 [DCC-2655] Don’t let getOneOrNullResult throw NoResultException
  • DDC-2668 - DQL TRIM function is not converted into TRIM SQL correctly
  • DDC-2673 - #785 Update dql-custom-walkers.rst
  • DDC-2676 - #786 Minor updates while reading the basic-mapping page
  • DDC-2678 - #787 Update DDC719Test.php to be compatible with MsSQL
  • DDC-2681 - #790 HHVM compatibility: func_get_args
  • DDC-2682 - #791 Implemented “contains” operator for Criteria expressions
  • DDC-2683 - #792 DDC-2668 Fix trim leading zero string
  • DDC-2689 - Doctrine ORM test suite failing on MySQL
  • DDC-2690 - Doctrine ORM test suite failing on PostgresSQL
  • DDC-2696 - #795 Update query-builder.rst
  • DDC-2699 - #797 CS fixes
  • DDC-2700 - #798 Identifier can be empty for MappedSuperclasses
  • DDC-2702 - #799 remove unused test case
  • DDC-2704 - When using Discriminator EntityManager#merge fails
  • DDC-2706 - #801 Update SqlWalker.php fixed wrong GROUP BY clause on SQL Server platform
  • DDC-2707 - #802 Respect unsigned fields when tables get converted to entities.
  • DDC-2711 - #803 Appended newline to (newly) generated files for PSR2 compatibility
  • DDC-2716 - #808 Second level cache
  • DDC-2718 - #809 Fix DDC-1514 test
  • DDC-2720 - #811 Update SingleScalarHydrator error message
  • DDC-2722 - #812 [Doc] add direct links to dbal and dql documentation
  • DDC-2728 - #815 Remove unused use statement
  • DDC-2732 - #816 Options not respected for ID Fields in XML Mapping Driver
  • DDC-2737 - #817 Removed “minimum-stability” : “dev” from composer.json
  • DDC-2738 - #818 Clarified tutorial context in section introducing orm:scehma-tool:* commnads
  • DDC-2740 - #819 Fixes a Fatal Error when using a subexpression in parenthesis
  • DDC-2741 - #820 Added support for field options to FieldBuilder
  • DDC-2750 - #822 DDC-2748 DQL expression “in” not working with Collection
  • DDC-2753 - #824 s/PostgreSQLPlatform/PostgreSqlPlatform/
  • DDC-2757 - Manual transcation handling not possible when transaction fails, documentation gives wrong example
  • DDC-2759 - ArrayHydration: Only first entity in OneToMany association is hydrated
  • DDC-2760 - #827 Added a failing test case for DDC-2759.
  • DDC-2764 - An orderBy on Criteria leads to DQL semantical error
  • DDC-2765 - #830 DDC-2764 Prefix criteria orderBy with rootAlias
  • DDC-2769 - #832 Added “readOnly: true” to YAML reference
  • DDC-2771 - #834 Add example use of repositoryClass in YAML
  • DDC-2774 - #836 Update annotations-reference.rst
  • DDC-2775 - Bug with cascade remove
  • DDC-2782 - #842 Added EntityManager query creation tests
  • DDC-2790 - #845 Don’t compute changeset for entities that are going to be deleted
  • DDC-2792 - #846 joinColumn is not required in manyToMany
  • DDC-2798 - #849 Error with Same Field, Multiple Values, Criteria and QueryBuilder
  • DDC-2799 - #850 Event listener to programmatically attach entity listeners.
  • DDC-2811 - #854 fix relative path to doctrine/common
  • DDC-2812 - #856 Fix dependency for tests/Doctrine/Tests/ORM/Functional/ReferenceProxyTest.php
  • DDC-2827 - #864 Updated parser to support aggegrate functions in null comparisons
  • DDC-2831 - #866 Mentioning the ‘refresh’ cascading property in the documentation list
  • DDC-2843 - SchemaTool update SQL always contains queries to set default value on columns, even if they haven’t changed.
  • DDC-2847 - #871 XCache cannot be flushed on the CLI -> for pretty much the same reason as APC
  • DDC-2853 - #873 Try running unit tests on HHVM
  • DDC-2855 - #875 Adding tests that confirm that DDC-2845 is fixed
  • DDC-2856 - #876 Fixing wrong key for allowing HHVM failures
  • DDC-2862 - When update cached entitiy, entity lost OneToOne relationship
  • DDC-2866 - #883 DDC-2862 Fix non initialized association proxy
  • DDC-2867 - #884 [SLC] Fix cache misses using one-to-one inverse side
  • DDC-2869 - #886 DDC-1256 Fix applying ON/WITH conditions to first join in Class Table Inheritance
  • DDC-2875 - #890 [DBAL-563] Add general IDENTITY generator type support for sequence emulating platforms
  • DDC-2876 - #891 Allow to not generate extra use
  • DDC-2878 - #893 autoGenerate arg from bool to int
  • DDC-2880 - #894 Fix typos - QueryBuilder
  • DDC-2884 - #896 Ensure elements preceed
  • DDC-2885 - #897 Respected ‘inheritanceType’ at Entity level
  • DDC-2889 - #900 Fix connection mock fetchColumn signature
  • DDC-2890 - Paginator generates invalid sql for some dql with setUseOutputWalkers(false) and $fetchJoinCollection = true
  • DDC-2903 - #906 removed erroneous tip
  • DDC-2907 - #907 DDC-1632 OneToMany Fetch eager
  • DDC-2908 - #908 DDC-2862 Fix lazy association load
  • DDC-2913 - #909 Fix DatabaseDriverTest on SQL Server
  • DDC-2914 - #910 DDC-2310 Fix SQL generation on table lock hint capable platforms
  • DDC-2916 - #911 fix foreach coding style
  • DDC-2919 - LockMode::NONE evaluation inconsistencies in ORM
  • DDC-2921 - #912 Avoid PersistentCollection::isEmpty() to fully load the collection.
  • DDC-2931 - OneToOne self-referencing fails when loading referenced objects
  • DDC-2933 - #917 DDC-2931
  • DDC-2934 - #918 Fix use of function in OrderBy
  • DDC-2935 - #919 tests for DDC-2890
  • DDC-2937 - #920 SingleScalarHydrator reports ambiguous error.
  • DDC-2943 - Paginator not work with second level cache in Doctrine 2.5
  • DDC-2946 - #926 Feature/console em helper interface
  • DDC-2947 - #927 s/EntityManager/EntityManagerInterface/ in a few places
  • DDC-2948 - #928 Support PHPUnit 3.8+ Compatibility
  • DDC-2952 - #932 DDC-2919 Make lock mode usage consistent
  • DDC-2956 - #934 faild test with multiple HINT_CUSTOM_TREE_WALKERS
  • DDC-2957 - #935 Remove incorrect (outdated) validation for public fields in SchemaValidator
  • DDC-2958 - #936 Making testing dependencies explicit
  • DDC-2961 - #938 Missing join-tables added in example
  • DDC-2967 - #943 Validate embeddables do not contain other embeddables.
  • DDC-2968 - #944 Fixed InputOption modes
  • DDC-2969 - #945 Fix CS
  • DDC-2971 - #947 Cleaned up further unused imports.
  • DDC-2974 - #950 Can cache empty collections
  • DDC-2975 - #951 More informational entity not found exception
  • DDC-2976 - #952 Add DB-level onDelete CASCADE example
  • DDC-2989 - ORM should allow custom index names for foreign associations.
  • DDC-2996 - UnitOfWork::recomputeSingleEntityChangeSet() will not add a new change set
  • DDC-2997 - #960 allow passing EntityManagerInterface when creating a HelperSet
  • DDC-2998 - #961 DDC-2984 Provide TestCase to reproduce bug
  • DDC-3002 - #964 [SLC] DDC-2943 Disable slc for pagination queries
  • DDC-3003 - #965 [SLC] Add support for criteria
  • DDC-3008 - #967 [SLC] Add query builder options
  • DDC-3009 - #968 Test: Add failing test
  • DDC-3010 - #969 [Doc] added note about Criteria limits on PersistentCollection
  • DDC-3012 - #971 [SLC] Fix query association proxy
  • DDC-3013 - #972 Capitalize @GeneratedValue (annotations-reference.rst)
  • DDC-3015 - #974 [SLC] Resolve association cache entry
  • DDC-3018 - DQL “NEW” Operator and Literal type “String”
  • DDC-3021 - #976 Add cache invalidation strategy to AbstractQuery
  • DDC-3023 - #977 Fix wrong annotation
  • DDC-3028 - #978 DDC-2987 Enable empty prefixes for inlined embeddable
  • DDC-3033 - Regression in computeChangeSets (ManyToMany relation)
  • DDC-3038 - #982 Failing Test (since commit 53a5a48aed7d87aa1533c0bcbd72e41b686527d8)
  • DDC-3041 - #984 Use boolean values for ‘unique’ attribute
  • DDC-3042 - select issue field names with numbers
  • DDC-3045 - SQL Injection in Persister API
  • DDC-3047 - XML Exporter driver does not export association fetch-mode
  • DDC-3049 - #988 Exporter support for association fetch modes
  • DDC-3054 - #991 Ability to define custom functions with callback instead of class name
  • DDC-3058 - #993 Update JoinColumn.php
  • DDC-3060 - #995 Allow cascaded clearing of associated Entities
  • DDC-3061 - #996 DDC-3027 Embedded in MappedSuperclass
  • DDC-3065 - Generated ‘IN’ clause doesn’t handle ‘null’ values (needs to add ‘IS NULL’ check)
  • DDC-3067 - #999 DDC-3065 null value in in criteria support
  • DDC-3069 - #1000 DDC-3068 EntityManager::find accept array of object as id
  • DDC-3071 - #1002 Fixed wrongly initialized property.
  • DDC-3074 - #1004 Removed all useless occurrence of require_once TestInit.php
  • DDC-3075 - #1005 Added support of the subselect expressions into NEW expressions
  • DDC-3078 - Doctrine::__construct is in an interface
  • DDC-3080 - #1008 DDC-3078 SLC Cache interface ctor removal
  • DDC-3081 - #1009 HHVM compatibility
  • DDC-3082 - #1010 Fixed validation message
  • DDC-3085 - NULL comparison are not supported for result variables in the HAVING clause
  • DDC-3092 - #1012 Ddc 3078 slc cache interface ctor removal
  • DDC-3093 - #1013 Remove SimpleXmlElement hack
  • DDC-3095 - #1014 Update second level cache doc
  • DDC-3100 - #1018 DBAL-878 Wrong mapping type
  • DDC-3103 - Is embedded class information in ClassMetadata is not stored when serializing.
  • DDC-3106 - #1023 DDC-3027 Avoid duplicated mapping using Embedded in MappedSuperclass
  • DDC-3107 - #1024 [Persister] Remove the insertSql cache
  • DDC-3108 - Criteria cannot reference a joined tables’ fields when used with an ORM QueryBuilder
  • DDC-3118 - #1028 Add method getAssociationsByType to ClassMetadata
  • DDC-3120 - Warning: Erroneous data format for unserializing PHP5.6+
  • DDC-3123 - Extra updates are not cleaned after execution
  • DDC-3124 - #1030 DDC-3123 extra updates cleanup
  • DDC-3129 - #1032 Add support for optimized contains
  • DDC-3143 - #1041 Allow all EntityManagerInterface implementations
  • DDC-3151 - #1048 Fix typo in exception message
  • DDC-3152 - Generating methods does not check for existing methods with different case
  • DDC-3160 - Regression in reComputeSingleEntityChangeset
  • DDC-3177 - #1063 singularize variable name on add/remove methods for EntityGenerator
  • DDC-3190 - #1071 Setup::createConfiguration breaks Cache interface contract
  • DDC-3191 - #1072 Fix attempt of traversing bool in FileLockRegion
  • DDC-3192 - Custom types do not get converted to PHP Value when result is gotten from custom query
  • DDC-3198 - #1075 Fixed query cache id generation: added platform to hash
  • DDC-3199 - #1076 Fix switch non-uniform syntax
  • DDC-3210 - #1080 possible fix for DDC-2021
  • DDC-3214 - #1082 added more informative error messages when invalid parameter count
  • DDC-3223 - Failing test (get id return string type)
  • DDC-3225 - #1087 Remove the error control operator
  • DDC-3227 - #1088 Fix the composer autoload paths for the doctrine CLT
  • DDC-3233 - #1092 Arbitrary Join count walkers solution
  • DDC-3237 - #1096 Changes for grammar and clarity
  • DDC-3239 - #1097 expandParameters/getType in BasicEntityPersister seems to really cover just few cases
  • DDC-3240 - #1098 #DDC-1590: Fix Inheritance in Code-Generation
  • DDC-3254 - #1111 Fix inheritance hierarchy wrong exception message
  • DDC-3269 - #1120 DDC-3205 Metadata info
  • DDC-3272 - EntityGenerator writes ‘MappedSuperClass’ instead of ‘MappedSuperclass’
  • DDC-3278 - #1123 Fixed the structure of the reverse-engineered mapping
  • DDC-3283 - #1125 Update improving-performance.rst
  • DDC-3288 - #1126 Fixed new line in docblock
  • DDC-3293 - XML Mappings disallow disabling column prefix for embeddables
  • DDC-3302 - #1132 DDC-3272 entity generator mapped superclass casing
  • DDC-3310 - #1138 Join column index names
  • DDC-3318 - #1143 Fixed a bug so that a versioned entity with a oneToOne id can be created
  • DDC-3322 - #1146 Allow orderBy to reference associations
  • DDC-3336 - Undefined property: Doctrine::$field
  • DDC-3341 - SessionValidator gives an error message on orderBy association, but it is no error.
  • DDC-3343 - PersistentCollection::removeElement schedules an entity for deletion when relationship is EXTRA_LAZY, with orphanRemoval false.
  • DDC-3346 - findOneBy returns an object with partial collection for the properties with mapping oneToMany/Fetch Eager
  • DDC-3350 - #1160 #1159 - multiple entity managers per repository factory should be supported
  • DDC-3355 - #1164 [QueryBuilder] Remove unused method parameters to run on HHVM/PHP7
  • DDC-3358 - #1166 Fixing HHVM+XSD validation tests as of documented HHVM inconsistencies
  • DDC-3368 - #1172 Don’t initialize detached proxies when merging them.
  • DDC-3370 - #1173 Fix merging of entities with associations to identical entities.
  • DDC-3378 - #1176 Support merging entities with composite identities defined through to-one associations
  • DDC-3379 - #1177 Ensure metadata cache is not ArrayCache in production
  • DDC-3380 - #1178 Fixing associations using UUIDs
  • DDC-3387 - #1182 #1086 identifier type in proxies
  • DDC-3394 - UOW CreateEntity failure with zerofill columns
  • DDC-3404 - #1188 Fixed counting exception
  • DDC-3419 - #1196 Inherit indexes from mapped superclass
  • DDC-3425 - #1202 Checks key exists rather than isset
  • DDC-3427 - Doctrine explicitly accepts EntityManager
  • DDC-3428 - #1204 Fix sequence-generator in MetaData exporter for XML Driver.
  • DDC-3429 - #1205 Hotfix - #1200 symfony 2.7 deprecation fixes
  • DDC-3430 - #1206 matching should not change critera
  • DDC-3431 - #1207 Embedded classes reflection new instance creation with internal PHP classes
  • DDC-3432 - #1208 DDC-3427 - class metadata factory should accept EntityManagerInterface instances
  • DDC-3433 - #1210 DDC-3336 - undefined property with paginator walker and scalar expression in ORDER BY clause
  • DDC-3434 - LimitSubqueryOutputWalker does not retain correct ORDER BY expression fields when dealing with HIDDEN sort fields
  • DDC-3435 - #1211 DDC-3434 - paginator ignores HIDDEN fields in ORDER BY query
  • DDC-3436 - #1212 DDC-3108 Fix regression where join aliases were no longer accessible in Criteria expressions
  • DDC-3437 - #1213 fix instantiation of embedded object in ReflectionEmbeddedProperty
  • DDC-3439 - #1216 test XML export driver, the field options, for #1214
  • DDC-3452 - #1222 Embeddables in metadata builder
  • DDC-3454 - #1224 Updated setParameters function for not replace all parameters
  • DDC-3466 - #1233 [Minor] Refactoring to avoid duplicate code
  • DDC-3470 - #1235 Consistent return type confirming with interface
  • DDC-3478 - #1239 Fix index duplication for unique association join columns
  • DDC-3482 - #1242 Attempting to lock a proxy object fails as UOW doesn’t init proxy first
  • DDC-3493 - New (PHP 5.5) “class” keyword - wrong parsing by EntityGenerator
  • DDC-3494 - #1250 Test case for “class” keyword
  • DDC-3502 - #1256 DDC-3493 - fixed EntityGenerator parsing for php 5.5 ”::class” syntax
  • DDC-3506 - #1259 Hotfix: Cache region should not mutate injected cache instance settings
  • DDC-3513 - #1262 Fixes the broken DQL command
  • DDC-3517 - #1265 Fix error undefined index “targetEntity” in persister
  • DDC-3524 - #1272 DDC-2704 - merge inherited transient properties - merge properties into uninitialized proxies
  • DDC-3534 - #1280 DDC-3346 #1277 find one with eager loads is failing
  • DDC-3536 - #1281 Hotfix/#1169 extra lazy one to many should not delete referenced entities
  • DDC-3538 - #1283 #1267 - order by broken in pagination logic (reverts #1220)
  • DDC-3544 - #1288 Hotfix - #1169 - extra lazy one to many must be no-op when not doing orphan removal
  • DDC-3551 - #1294 Avoid Connection error when calling ClassMetadataFactor::getAllMetadata()
  • DDC-3554 - #1295 Fix join when recreation of query from parts.
  • DDC-3564 - #1301 Add failing test with ToOne SL2 association
  • DDC-3566 - #1302 Store column values of not cache-able associations
  • DDC-3585 - #1311 DDC-3582 Wrong class is instantiated when using nested embeddables
  • DDC-3586 - #1312 Add proper pluralization into UpdateCommand
  • DDC-3587 - #1313 Added programmatical support to define indexBy on root aliases.
  • DDC-3597 - #1321 embeddedClasses support in mapped superclasses
  • DDC-3606 - #1325 fixed PostgreSQL and Oracle pagination issues
  • DDC-3608 - #1327 Properly generate default value from yml & xml mapping
  • DDC-3616 - #1333 Allow DateTimeImmutable as parameter value
  • DDC-3619 - spl_object_hash collision
  • DDC-3622 - #1336 Fix UoW warning with custom id object types
  • DDC-3623 - #1337 Paginator OrderBy fix take 2
  • DDC-3624 - #1338 DDC-3619 Update identityMap when entity gets managed again
  • DDC-3625 - #1339 DDC-2224 Honor convertToDatabaseValueSQL() in DQL query parameters
  • DDC-3629 - #1342 Paginator functional tests
  • DDC-3631 - #1344 Fix tests for SLC console commands failing due to console output decoration
  • DDC-3632 - #1345 Fix crashes in ConvertMappingCommand and GenerateEntitiesCommand...
  • DDC-3634 - #1346 Fix: generated IDs are converted to integer
  • DDC-3641 - #1350 Assigned default value to array
  • DDC-3643 - #1352 fix EntityGenerator RegenerateEntityIfExists
  • DDC-3645 - #1353 Paginator fixes take3
  • DDC-3650 - #1357 Drop useless execution bit

Documentation

Please report any issues you may have with the update on the mailing list or on JIRA.

Favicon for Doctrine Project 02:00 Doctrine Common 2.5.0 Release » Post from Doctrine Project Visit off-site link

Doctrine Common 2.5.0 Release

We are happy to announce the immediate availability of Doctrine Common 2.5.0.

Installation

You can install this version of Doctrine Common by using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/common": "2.5.0"
    }
}

Changes since 2.4.x

This is a list of issues solved in 2.5.0 since 2.4.x:

Bug

  • [DCOM-129] - Annotation parser matches colon after annotation
  • [DCOM-151] - [GH-233] [DocParser] Fix trying include classes if its must be ignored.
  • [DCOM-162] - [GH-244] return parameter for debug method
  • [DCOM-168] - ignoredAnnotationNames doesn’t work in Annotation loop
  • [DCOM-175] - Proxies return private properties in __sleep, which is not supported by PHP.
  • [DCOM-191] - Wrong inflection for “identity”
  • [DCOM-212] - [GH-296] Proxies shouldn’t serialize static properties in __sleep()
  • [DCOM-216] - [GH-298] Silence E_NOTICE warning: “Undefined index”.
  • [DCOM-220] - [GH-304] fix typo
  • [DCOM-223] - [GH-308] fix the optimize regex and adapt the tests to actually test classAnnotat...
  • [DCOM-228] - [GH-312] Improve UnexpectedValueException error message
  • [DCOM-261] - [GH-344] Fix fatal error when classexist tries to call the protected loader
  • [DCOM-270] - [GH-351] Added validation where allowed QCNs with ”:” NS separator
  • [DCOM-272] - Proxy generator doesn’t understand splat operator (PHP 5.6 argument unpacking)

Documentation

Improvement

New Feature

  • [DCOM-257] - [GH-342] Class metadata loading fallback hook in AbstractClassMetadataFactory
  • [DCOM-277] - [GH-357] Custom namespace separators for SymfonyFileLocator

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Wednesday 01 April, 2015

Favicon for Grumpy Gamer 09:00 Once Again... » Post from Grumpy Gamer Visit off-site link

In what's become a global internet tradition that will be passed down for generations to come...

Grumpy Gamer is 100% April Fools' joke free because April Fools' Day is a stupid fucking tradition.  There.  I said what everyone is thinking.


Favicon for Doctrine Project 02:00 Indoctrinator 0.0.1-alpha1 » Post from Doctrine Project Visit off-site link

Indoctrinator 0.0.1-alpha1

We are happy to announce the start of development on a new project called the indoctrinator.

What is Indoctrinator?

For several months, we tried to implement a way to validate the correct usage of the Doctrine Project mapping tools. This sort of validation logic includes:

  • immutability checks/suggestions
  • number of generated DB queries/hits reduction
  • memory impact control
  • hydration profiling
  • code generator avoidance
  • DDD (Domain-Driven Design) entity class/method naming conventions
  • ... and much more!

We decided to put these validation rules into a project.

How does Indoctrinator work?

Indoctrinator is currently only working with doctrine/orm version 2.5.x-dev, but the general working concept is as follows:

$indoctrinator = new Doctrine\Indoctrinator();

$indoctrinator->registerWithManager(new Doctrine\Indoctrinator\ManagerWrapper($entityManager));

Without going into much detail, Indoctrinator hooks into common APIs used in ORM internals, and by using AOP (Aspect-Oriented Programming), it catches common mistakes and issues and produces exceptions or log messages that “indoctrinate” the user on correct toolchain usage.

Release RoadMap

Indoctrinator is still in early development, but our plan is to release it with bindings for major editors and IDEs used in the PHP community.

The current version is 0.0.1-alpha1, and is released as a phar archive for now.

Development will likely take 6 or more months, while we stabilize the API and make the various mapper projects compatible with it.

How to get Indoctrinator?

Indoctrinator has its own dedicated documentation section in the doctrine website.

Reporting Issues

Please report any issues you may have with the project on the mailing list or on JIRA.

News stories from Tuesday 31 March, 2015

Favicon for Doctrine Project 02:00 Doctrine Mongo ODM Module release 0.8.2 » Post from Doctrine Project Visit off-site link

Doctrine Mongo ODM Module release 0.8.2

The Zend Framework Integration Team is happy to announce the new release of DoctrineMongoODMModule. DoctrineMongoODMModule 0.8.2 will be the last bugfix version with support for DoctrineModule 0.8 and, as a consequence, the last version that will support PHP 5.3. Further versions of the 0.8.* series may still be released in case of security issues.

The following issues were solved in this release:

To install this version, simply update your composer.json:

{
    "require": {
        "doctrine/doctrine-mongo-odm-module": "0.8.2"
    }
}
Favicon for Doctrine Project 02:00 Doctrine ORM 2.5.0-RC2 Release Candidate » Post from Doctrine Project Visit off-site link

Doctrine ORM 2.5.0-RC2 Release Candidate

We are happy to announce the immediate availability of Doctrine ORM 2.5.0-RC2.

This is a release candidate meant to allow users and contributors to verify the stability of the next iteration of the ORM.

We encourage all of our users to help us by trying out this release. Please report any possible problems or incompatibilities that may have been introduced during development.

What is new in 2.5.x?

We are currently in the process of documenting all the changes and new features that were introduced in Doctrine ORM 2.5.x.

You can find the current state of the 2.5.0 changes overview in the upgrade notes.

Release RoadMap

We expect to release the following versions of the ORM in the next few days:

  • 2.5.0 on 2015-04-02

Please note that these dates may change depending on the availability of our team.

Installation

You can install this version of the ORM by using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/orm": "2.5.0-RC2"
    },
    "minimum-stability": "dev"
}

Changes since 2.5.0-RC1

This is a list of issues solved in 2.5.0-RC2 since 2.5.0-RC1:

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Wednesday 25 March, 2015

Favicon for Doctrine Project 02:00 Doctrine ORM 2.5.0-RC1 Release Candidate » Post from Doctrine Project Visit off-site link

Doctrine ORM 2.5.0-RC1 Release Candidate

We are happy to announce the immediate availability of Doctrine ORM 2.5.0-RC1.

This is a release candidate meant to allow users and contributors to verify the stability of the next iteration of the ORM.

We encourage all of our users to help us by trying out this release. Please report any possible problems or incompatibilities that may have been introduced during development.

What is new in 2.5.x?

We are currently in the process of documenting all the changes and new features that were introduced in Doctrine ORM 2.5.x.

You can find the current state of the 2.5.0 changes overview in the upgrade notes.

Release RoadMap

We expect to release the following versions of the ORM in the coming days:

  • 2.5.0 on 2015-04-02

Please note that these dates may change depending on the availability of our team.

We also apologise for the major delays in this release, caused by the limited availability of the core team over the past months.

Installation

You can install this version of the ORM by using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/orm": "2.5.0-RC1"
    },
    "minimum-stability": "dev"
}


Changes since 2.5.0-beta1

This is a list of issues solved in 2.5.0-RC1 since 2.5.0-beta1:

  • [DDC-3632] [#1345] Fix crashes in ConvertMappingCommand and GenerateEntitiesCommand when using entities with joined table inheritance
  • [DDC-3634] [#1346] Fix: generated IDs are converted to integer even when they are big integers
  • [DDC-3630] [DDC-3621] [#1343] Support embeddables in partial object query expression
  • [DDC-3623] [DDC-3629] [#1337] [#1342] Paginator functional tests and sorting corrections
  • [DDC-2224] [DDC-3625] [#1339] Honor convertToDatabaseValueSQL in DQL query parameters and caches

Please report any issues you may have with the update on the mailing list or on Jira.

Favicon for Doctrine Project 02:00 Doctrine Common 2.5.0-beta1 Pre-Release » Post from Doctrine Project Visit off-site link

Doctrine Common 2.5.0-beta1 Pre-Release

We are happy to announce the immediate availability of Doctrine Common 2.5.0-beta1.

This is a pre-release meant to allow users and contributors to try out the new upcoming features of the Common package.

We encourage all of our users to help us by trying out this beta release. Please report any possible problems or incompatibilities that may have been introduced during development.

Starting from this release, no more new features or breaking changes will be allowed into the repository until 2.6.x development starts.

Release RoadMap

We expect to release the following versions of the Common package in the coming days:

  • 2.5.0 on 2015-04-02

Please note that these dates may change depending on the availability of our team.

Installation

You can install this version of the Common package by using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/common": "2.5.0-beta2"
    },
    "minimum-stability": "dev"
}

Changes since 2.4.x

This is a list of issues solved in 2.5.0-beta1 since 2.4.x:

Bug

  • [DCOM-129] - Annotation parser matches colon after annotation
  • [DCOM-151] - [GH-233] [DocParser] Fix trying to include classes that should be ignored.
  • [DCOM-162] - [GH-244] return parameter for debug method
  • [DCOM-168] - ignoredAnnotationNames doesn’t work in Annotation loop
  • [DCOM-175] - Proxies return private properties in __sleep, which is not supported by PHP.
  • [DCOM-191] - Wrong inflection for “identity”
  • [DCOM-212] - [GH-296] Proxies shouldn’t serialize static properties in __sleep()
  • [DCOM-216] - [GH-298] Silence E_NOTICE warning: “Undefined index”.
  • [DCOM-220] - [GH-304] fix typo
  • [DCOM-223] - [GH-308] fix the optimize regex and adapt the tests to actually test classAnnotat...
  • [DCOM-228] - [GH-312] Improve UnexpectedValueException error message
  • [DCOM-261] - [GH-344] Fix fatal error when classexist tries to call the protected loader
  • [DCOM-270] - [GH-351] Added validation allowing QCNs with ":" as the namespace separator
  • [DCOM-272] - Proxy generator doesn’t understand splat operator (PHP 5.6 argument unpacking)

Documentation

Improvement

New Feature

  • [DCOM-257] - [GH-342] Class metadata loading fallback hook in AbstractClassMetadataFactory
  • [DCOM-277] - [GH-357] Custom namespace separators for SymfonyFileLocator

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Tuesday 24 March, 2015

Favicon for ircmaxell's blog 17:00 Thoughts On The Design Of APIs » Post from ircmaxell's blog Visit off-site link
Developers as a whole suck at API design. We don't suck at making APIs. We don't suck at implementing them. We don't suck at using them (well, some more than others). But we do suck at designing them. In fact, we suck so much that we've made entire disciplines around trying to design better ones (BDD, DDD, TDD, etc). There are lots of reasons for this, but there are a few that I really want to focus on.

Read more »

News stories from Sunday 22 March, 2015

Favicon for Doctrine Project 02:00 Doctrine Data Fixtures 1.0.1 » Post from Doctrine Project Visit off-site link

Doctrine Data Fixtures 1.0.1

We are happy to announce the immediate availability of Doctrine Data Fixtures 1.0.1.

In true semver fashion, this is a bugfix release.

What is new in 1.0.x?

Please report any issues you may have with the update on Github.

  • Added Travis: 69c2230
  • Now supports table quoting for dropping joined tables: #180
  • Fixed ProxyReferenceRepository, which forced entities to have a getId() method: 8ffac1c
  • Fixed identifiers retrieval on ReferenceRepository if the Entity is not yet managed by the UnitOfWork: dfc0dc9
  • Doctrine dependencies relaxed: 83a910f
  • Fix purging non-public schema tables: #171

Release RoadMap

We expect to release the following versions containing the pending patches in the coming days:

  • 1.1.0 on 2015-03-26
  • 1.2.0 within 2015-04

Please note that these dates may change depending on the availability of our team.

Favicon for Doctrine Project 02:00 Doctrine Migrations 1.0.0-alpha3 Pre-Release » Post from Doctrine Project Visit off-site link

Doctrine Migrations 1.0.0-alpha3 Pre-Release

We are happy to announce the immediate availability of Doctrine Migrations 1.0.0-alpha3.

This is a pre-release meant to allow users and contributors to try out the new upcoming features of the migrations.

We encourage all of our users to help us by trying out this alpha release. Please report any possible problems or incompatibilities that may have been introduced during development.

What is new in 1.0.x?

You can find the current state of the 1.0.0 changes overview in the upgrade notes.

Please report any issues you may have with the update on Github.

News stories from Friday 20 March, 2015

Favicon for Web Mozarts 11:05 Managing Web Assets with Puli » Post from Web Mozarts Visit off-site link

Yesterday marked the release of the last beta version of Puli 1.0. Puli is now feature-complete and ready for you to try. The documentation has been updated and contains all the information that you need to get started. My current plan is to publish a Release Candidate by the end of the month and a first stable release at the end of April.

The most important addition since the last beta release is Puli’s new Asset Plugin. Today, I’d like to show you how this plugin helps to manage the web assets of your project and your installed Composer packages independent of any specific PHP framework.

What is Puli?

You never heard of Puli before? In a nutshell, Puli is a resource manager built on top of Composer. Just like Composer generates an autoloader for the classes in your Composer packages, Puli generates a resource repository that contains all files that are not PHP classes (images, CSS, XML, YAML, HTML, you name it). You can access these resources by simple paths prefixed with the name of the package:

echo $twig->render('/acme/blog/views/footer.html.twig');

The only exceptions are end-user applications, which have the prefix /app by convention:

echo $twig->render('/app/views/index.html.twig');

Read Puli at a Glance to get a better high-level view of Puli’s features.

Update 2015/04/06

This post was updated in order to reflect that Puli’s Web Resource Plugin was renamed to “Asset Plugin”.

Web Assets

Some resources – such as templates or configuration files – are needed by the web server only. Others – like CSS files and images – need to be placed in a public directory, where browsers can download them. I’ll call these files web assets here.

Puli’s Asset Plugin takes care of two things:

  • installing web assets in their public location;
  • generating the URLs for these assets.

The public location for installing assets is called an install target in Puli’s language. Puli supports virtually any kind of install target, such as:

  • the document root of your own web server
  • the document root of another web server
  • a Content Delivery Network (CDN)

Install targets store three pieces of information:

  • their location (a directory path, a URL, …)
  • the used installer (symlink, copy, ftp, rsync, …)
  • their URL format

The URL format is used to generate URLs for the assets installed in the target. The default format is /%s, but you could set it to more elaborate values such as http://cdn.example.com/path/%s?v3.

Creating an Install Target

Let me walk you through a simple example of using the plugin for a typical project. We will work with the following setup:

  • the application’s assets are stored in the Puli path /app/public
  • the assets of the “acme/blog” package are stored in /acme/blog/public
  • all assets should be installed in the directory public_html

Before we can start, we need to install the plugin with Composer:

$ composer require puli/asset-plugin:~1.0

Make sure “minimum-stability” is set to “dev” in your composer.json file:

{
    "minimum-stability": "dev"
}

Activate the plugin with Puli’s Command Line Interface (CLI):

$ puli plugin install Puli\\AssetPlugin\\Api\\AssetPlugin

The plugin is loaded successfully if the command puli target succeeds:

$ puli target
No install targets. Use "puli target add <name> <directory>" to add a target.

Let’s create a target named “local” now that points to the aforementioned public_html directory:

$ puli target add local public_html

Run puli target again to see the target that you just added:

Result of the command "puli target"

Installing Web Assets

With the install target ready, we can now map resources to the target:

$ puli asset map /app/public /
$ puli asset map /acme/blog/public /blog

Let’s run puli asset to see the mappings we added:

The output of this command gives us a lot of information:

  • We added our assets to the default target, i.e. our only target “local”. In some cases, it is useful to have more than one install target.
  • The assets in /app/public will be installed in public_html.
  • The assets in /acme/blog/public will be installed in public_html/blog.

All that is left to do is installing the assets:

You should be able to access your assets in the browser now.

Generating Resource URLs

Now that our assets are publicly available, our application needs to generate their proper URLs. If you use Twig, you can use the asset_url() function of Puli’s Twig Extension to do that:

<!-- /images/header.png -->
<img src="{{ asset_url('/app/public/images/header.png') }}" />

The function accepts absolute Puli paths or paths relative to the Puli path of your template:

<img src="{{ asset_url('../images/header.png') }}" />

If you need to generate URLs in PHP code, you can use Puli’s AssetUrlGenerator. Add the following setup code to your bootstrap file or your Dependency Injection Container:

// Puli setup
$factoryClass = PULI_FACTORY_CLASS;
$factory = new $factoryClass();
$repository = $factory->createRepository();
$discovery = $factory->createDiscovery($repository);
 
// URL Generator setup
$urlGenerator = $factory->createUrlGenerator($discovery);

Asset URLs can be generated with the generateUrl() method of the URL generator:

// /images/header.png
$urlGenerator->generateUrl('/app/public/images/header.png');

Read the Web Assets guide in the Puli Documentation if you want to learn more about handling web assets with Puli.

The Future of Packages in PHP

With Puli and especially with Puli’s Asset Plugin, we have exciting new possibilities of creating Composer packages that work with different frameworks at the same time. Basically, a bundle/plugin/module/… of the framework of your choice is reduced to:

  • PHP code, which is autoloaded by Composer’s autoloader.
  • Resource files that are managed and published by Puli.
  • A thin layer of configuration files/code for integrating your Package with a framework of your choice.

Since the framework-dependent code is reduced to a few configuration files or classes, it is possible to add support for multiple frameworks at the same time. For open-source developers, that’s a great thing, because they have to maintain far fewer packages and much less code than before. For users of open-source software, that’s a great thing too, because it becomes possible to use the magnificent package X with your framework Y, even though X was sadly developed for framework Z. I think that’s exciting. Do you?

Let me know what you think in the comments. Read the Web Assets guide in the Puli Documentation if you want to learn more about the plugin.

News stories from Wednesday 18 March, 2015

Favicon for Doctrine Project 02:00 Doctrine ORM 2.5.0 BETA 1 Released » Post from Doctrine Project Visit off-site link

Doctrine ORM 2.5.0 BETA 1 Released

We are happy to announce the immediate availability of Doctrine ORM 2.5.0-beta1.

Due to day-job related responsibilities, we are a month behind our schedule. Please bear with us as we prepare this new release.

This is a pre-release meant to allow users and contributors to try out the new upcoming features of the ORM.

We encourage all of our users to help us by trying out this beta release. Please report any possible problems or incompatibilities that may have been introduced during development.

Starting from this release, no more new features or breaking changes will be allowed into the repository until 2.6.x development starts.

What is new in 2.5.x?

We are currently in the process of documenting all the changes and new features that were introduced in Doctrine ORM 2.5.x.

You can find the current state of the 2.5.0 changes overview in the upgrade notes.

Release RoadMap

We expect to release the following versions of the ORM in the coming days:

  • 2.5.0-RC1 on 2015-03-25
  • 2.5.0 on 2015-04-02

Please note that these dates may change depending on the availability of our team.

Installation

You can install this version of the ORM by using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/orm": "2.5.0-beta1"
    },
    "minimum-stability": "dev"
}

Changes since 2.5.0-alpha2

This is a list of issues solved in 2.5.0-beta1 since 2.5.0-alpha2:

  • [DDC-3452] Embeddables Support for ClassMetadataBuilder
  • [DDC-3551] Load platform lazily in ClassMetadataFactory to avoid database connections.
  • [DDC-3258] Improve support for composite primary keys and associations as keys.
  • [DDC-3554] Allow to recreate DQL QueryBuilder from parts.
  • [DDC-3461] Allow setting association as primary key in ClassMetadataBuilder API with makePrimaryKey().
  • [DDC-3587] Added programmatical support to define indexBy on root aliases.
  • [DDC-3588] Add support for seconds in DATE_ADD DQL function.
  • [DDC-3585] Fix instantiation of nested embeddables.
  • [DDC-3607] Add support for orphan removal in ClassMetadataBuilder/AssociationBuilder
  • [DDC-3597] Add support for embeddables in MappedSuperclasses.
  • [DDC-3616] Add support for DateTimeImmutable in Query parameter detection.
  • [DDC-3622] Improve support for objects as primary key by casting to string in UnitOfWork.
  • [DDC-3619] Update IdentityMap when entity gets managed again fixing spl_object_hash collision.
  • [DDC-3608] Fix bug in EntityGenerator to XML/YML with default values.
  • [DDC-3590] Fix bug in PostgreSQL with naming strategy of non-default schema tables.
  • [DDC-3566] Fix bug in Second-Level Cache with association identifiers.
  • [DDC-3528] Have PersistentCollection implement AbstractLazyCollection from doctrine/collections.
  • [DDC-3567] Allow access to all aliases for a QueryBuilder.

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Monday 16 March, 2015

Favicon for ircmaxell's blog 21:30 Dimensional Analysis » Post from ircmaxell's blog Visit off-site link
There's one skill that I learned in College that I wish everyone would learn. I wish it was taught to everyone in elementary school, it's that useful. It's also deceptively simple. So without any more introduction, let's talk about Dimensional Analysis:

Read more »

News stories from Thursday 12 March, 2015

Favicon for ircmaxell's blog 21:00 Security Issue: Combining Bcrypt With Other Hash Functions » Post from ircmaxell's blog Visit off-site link
The other day, I was directed at an interesting question on StackOverflow asking if password_verify() was safe against DoS attacks using extremely long passwords. Many hashing algorithms depend on the amount of data fed into them, which affects their runtime. This can lead to a DoS attack where an attacker can provide an exceedingly long password and tie up computer resources. It's a really good question to ask of Bcrypt (and password_hash). As you may know, Bcrypt is limited to 72 character passwords. So on the surface it looks like it shouldn't be vulnerable. But I chose to dig in further to be sure. What I found surprised me.
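
As a quick illustration of the 72-character limit mentioned above (this snippet is not from the article itself): two passwords that differ only after byte 72 verify against the same bcrypt hash.

$a = str_repeat('A', 72) . 'first';
$b = str_repeat('A', 72) . 'second';

$hash = password_hash($a, PASSWORD_BCRYPT);

var_dump(password_verify($a, $hash)); // bool(true)
var_dump(password_verify($b, $hash)); // bool(true) -- bytes past 72 are ignored by bcrypt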

Read more »

News stories from Tuesday 10 March, 2015

Favicon for Ramblings of a web guy 01:04 Using socket_connect with a timeout » Post from Ramblings of a web guy Visit off-site link
TL;DR

I was having trouble with socket connections timing out reliably. Sometimes my timeout would be reached; other times the connect would fail after three to six seconds. I finally figured out it had to do with trying to connect to a routable, non-localhost address. The function I finally ended up with reliably connects to a working server, fails quickly for an address/port that is not reachable, and reaches the full timeout for routable addresses that are not up.

I have put a version of my final function into a Gist on Github. I hope someone finds it useful.

Full Story

So, it seems that when you try and connect to an IP that is routable on the network, but not answering, the TCP stack has some built in timeouts that are not obvious. This differs from trying to connect to an IP address that is up, but not listening on a given port. We took a Gearman server down for maintenance and I noticed our warning logs were showing a 3 to 7 second delay between the attempt to queue jobs and the warning log. The timeout we had set was only 100ms. So, this seemed odd.

After a lot of messing around, a coworker pointed out that in production, the failures were happening for an IP that was routable on the network, but that had no host listening on the IP. I had been using localhost and some foreign port for my "failed" server. After using an IP that was local to our LAN but had no host listening on the IP, I was able to recreate it on a dev server. I figured out that if you set the send and receive timeouts really low before calling connect, you can loop while calling connect. You check the error state and timeout. As long as the error is an acceptable one and the timeout is not reached, keep trying until it connects. It works like a charm.

I found several similar examples to this on the web. However, none of them mixed all these techniques.

You can simply set the send and receive timeouts to your actual timeout and it will return quicker. However, the timeouts apply to the packets. And there are retry rules in place. So, I found that a 100ms timeout for each send and receive would wind up taking 500ms or so to actually fail. This was not what I wanted. I wanted more control. So, I set a 100 microsecond timeout during connect. This makes socket_connect return quickly. As long as the socket error is 115 (in progress) or 114 (already trying), we keep calling it. Unless of course our timeout is reached. Then we fail.
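
Here is a condensed sketch of that approach (the author's actual, more complete function is in the Gist linked above; names and defaults here are illustrative):

// Set tiny kernel send/receive timeouts so socket_connect() returns quickly,
// then keep calling it while the error is EINPROGRESS (115) or EALREADY (114),
// until our own timeout is reached.
function connectWithTimeout($host, $port, $timeout = 0.1)
{
    $socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
    if ($socket === false) {
        return false;
    }

    $tiny = array('sec' => 0, 'usec' => 100); // 100 microseconds
    socket_set_option($socket, SOL_SOCKET, SO_SNDTIMEO, $tiny);
    socket_set_option($socket, SOL_SOCKET, SO_RCVTIMEO, $tiny);

    $start = microtime(true);
    while (!@socket_connect($socket, $host, $port)) {
        $error = socket_last_error($socket);
        // Any error other than "in progress" / "already trying" is a real failure.
        if ($error !== 115 && $error !== 114) {
            socket_close($socket);
            return false;
        }
        if ((microtime(true) - $start) >= $timeout) {
            socket_close($socket);
            return false; // our own timeout reached
        }
        usleep(1000);
    }

    return $socket;
}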

It works really well. Should help for doing server maintenance on our Gearman servers.

News stories from Saturday 21 February, 2015

Favicon for Grumpy Gamer 04:10 Thimbleweed Park Dev Blog » Post from Grumpy Gamer Visit off-site link

If you're wondering why it's so quiet over here at Grumpy Gamer, rest assured, it has nothing to do with me not being grumpy anymore.

The mystery can be solved by heading on over to the Thimbleweed Park Dev Blog and following the fun antics of making a game.

News stories from Wednesday 11 February, 2015

Favicon for ircmaxell's blog 20:00 Scalar Types and PHP » Post from ircmaxell's blog Visit off-site link
There's currently a proposal that's under vote to add Scalar Typing to PHP (it has since been withdrawn). It's been a fairly controversial RFC, but at this point in time it's currently passing with 67.8% of votes. If you want a simplified breakdown of the proposal, check out Pascal Martin's excellent post about it. What I want to talk about is more of an opinion. Why I believe this is the correct approach to the problem.

I have now forked the original proposal and will be bringing it to a vote shortly.
Read more »

News stories from Tuesday 03 February, 2015

Favicon for Ramblings of a web guy 05:02 Most epic ticket of the day » Post from Ramblings of a web guy Visit off-site link
UPDATE: I should clarify. This ticket is an internal ticket at DealNews. It is about what the defaults on our servers should be. It is not about what the defaults should be in MySQL. The frustration that MySQL's utf8 charset only supports 3-byte characters is quite real.

 This epic ticket of the day is brought to you by Joe Hopkinson.

#7940: Default charset should be utf8mb4
------------------------------------------------------------------------
 The RFC for UTF-8 states, AND I QUOTE:

 > In UTF-8, characters from the U+0000..U+10FFFF range (the UTF-16
 accessible range) are encoded using sequences of 1 to 4 octets.

 What's that? You don't believe me?! Well, you can read it for yourself
 here!

 What is an octet, you ask? It's a unit of digital information in computing
 and telecommunications that consists of eight bits. (Hence, __oct__et.)

 "So what?", said the neck bearded MySQL developer dressed as Neo from the
 Matrix, as he smugly quaffed a Surge and settled down to play Virtua
 Fighter 4 on his dusty PS2.

 So, if you recall from your Pre-Intro to Programming, 8 bits = 1 byte.
 Thus, the RFC states that the maximum storage requirement for a
 multibyte character is 4 bytes.

 I know that RFCs are more of GUIDELINE, right? It's not like they could be
 considered a standard or anything! It's not like there should be an
 implicit contract when an implementor decides to use a label like "UTF-8",
 right?

 Because of you, we have to strip our readers' carefully crafted emoji.
 Because of you, our search term data will never be exact. Because of you,
 we have to spend COUNTLESS HOURS altering every table that we have (which
 is a lot, by the way) to make sure that we can support a standard that was
 written in 2003!

 A cursory search shows that shortly after 2003, MySQL release quality
 started to tank. I can only assume that was because of you.

 Jerk.

 * The default charset should be utf8mb4.
 * Alter and test critical business processes.
 * Change OrderedFunctionSet to generate the appropriate tables.
 * Generate ptosc or propagator scripts to update everything else, as needed.
 * Curse the MySQL developer who caused this.

News stories from Thursday 29 January, 2015

Favicon for #openttdcoop 00:40 Server/DevZone Outtage » Post from #openttdcoop Visit off-site link

Hi,

As you may have noticed, our services have had an outage. This happened during maintenance that was required for security updates related to CVE-2015-0235 (the glibc story / http://www.openwall.com/lists/oss-security/2015/01/27/9). When we rebooted the server, the scariest thing happened: our server did not come back online. After some help from our hosting provider we managed to log back in.

To make the most of this situation we also immediately started converting some of our local containers to a disk image format (PLOOP / https://openvz.org/Ploop/Why). However, because one of our main containers, which holds all the HG repositories, contains so many small files, this conversion is taking longer than expected.

We want to apologize for this situation and are waiting for this container conversion to finish. After that, the most critical containers will all have been converted; most of the remaining ones are related to non-development services and should not see extended downtime like this.

Regards,

^Spike^

News stories from Tuesday 27 January, 2015

Favicon for #openttdcoop 21:26 RAWR!!! » Post from #openttdcoop Visit off-site link

Ladies and nutmen,

just now I am realizing I forgot to officially mention that I have been working on another project for the past months. RAWR Absolute World Replacement is currently 32bpp/ExtraZoom LANDSCAPE with ROADS and TRACKS. Eventually I am hoping to replace all the sprites the game needs, and the final output then could be a full base set.

Visually, the set is obviously 32bpp/ExtraZoom, which looks relatively nice. Functionally, it lets you choose from the 4 climates and force any of them visually. That way you can apply any of them you want – especially if you load the newGRF as a static one. I hope you like it; there are still a lot of things to be done, but the core is there.

The project home is at the devzone per usual – you can also find a guide on how to apply static NewGRFs. I also have a thread at tt-forums, you are welcome to contribute/place your impressions/screenshots there 🙂

You can download RAWR from the online content – BaNaNaS – through the game, or from the website manually.
Enjoy and let me know what you think!

V

RAWR_001

News stories from Sunday 25 January, 2015

Favicon for Doctrine Project 02:00 Doctrine ORM 2.5.0-alpha2 Pre-Release » Post from Doctrine Project Visit off-site link

Doctrine ORM 2.5.0-alpha2 Pre-Release

We are happy to announce the immediate availability of Doctrine ORM 2.5.0-alpha2.

This is a pre-release meant to allow users and contributors to try out the new upcoming features of the ORM.

We encourage all of our users to help us by trying out this alpha release. Please report any possible problems or incompatibilities that may have been introduced during development.

This pre-release is not yet at feature-freeze; therefore, we urge contributors to contact us if there is any change that requires our attention before we reach the beta (feature-freeze) release stage.

What is new in 2.5.x?

We are currently in the process of documenting all the changes and new features that were introduced in Doctrine ORM 2.5.x.

You can find the current state of the 2.5.0 changes overview in the upgrade notes.

Release RoadMap

We expect to release the following versions of the ORM in the coming days:

  • 2.5.0-beta1 on 2015-02-02
  • 2.5.0-beta2 on 2015-02-09
  • 2.5.0 on 2015-02-16

Please note that these dates may change depending on the availability of our team.

Additionally, we will delay the release if any newly introduced critical bugs are detected, as already happened with this 2.5.0-alpha2 release.

Installation

You can install this version of the ORM by using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/orm": "2.5.0-alpha2"
    },
    "minimum-stability": "dev"
}

Changes since 2.5.0-alpha1

This is a list of issues solved in 2.5.0-alpha2 since 2.5.0-alpha1:

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Tuesday 20 January, 2015

Favicon for Joel on Software 19:00 Stack Exchange Raises $40m » Post from Joel on Software Visit off-site link

Today Stack Exchange is pleased to announce that we have raised $40 million, mostly from Andreessen Horowitz.

Everybody wants to know what we’re going to do with all that money. First of all, of course we’re going to gold-plate the Aeron chairs in the office. Then we’re going to upgrade the game room, and we’re already sending lox platters to our highest-rep users.

But I’ll get into that in a minute. First, let me catch everyone up on what’s happening at Stack Exchange.

In 2008, Jeff Atwood and I set out to fix a problem for programmers. At the time, getting answers to programming questions online was super annoying. The answers that we needed were hidden behind paywalls, or buried in thousands of pages of stale forums.

So we built Stack Overflow with a single-minded, compulsive, fanatical obsession with serving programmers with a better Q&A site.

Everything about how Stack Overflow works today was designed to make programmers’ jobs easier. We let members vote up answers, so we can show you the best answer first. We don’t allow opinionated questions, because they descend into flame wars that don’t help people who need an answer right now. We have scrupulously avoided any commercialization of our editorial content, because we want to have a site that programmers can trust.

Heck, we don’t even allow animated ads, even though they are totally standard on every other site on the Internet, because it would be disrespectful to programmers to strain their delicate eyes with a dancing monkey, and we can’t serve them 100% if we are distracting them with a monkey. That would only be serving them 98%. And we’re OBSESSED, so 98% is like, we might as well close this all down and go drive taxis in Las Vegas.

Anyway, it worked! Entirely thanks to you. An insane number of developers stepped up to pass on their knowledge and help others. Stack Overflow quickly grew into the largest, most trusted repository of programming knowledge in the world.

Quickly, Jeff and I discovered that serving programmers required more than just code-related questions, so we built Server Fault and Super User. And when that still didn’t satisfy your needs, we set up Stack Exchange so the community could create sites on new topics. Now when a programmer has to set up a server, or a PC, or a database, or Ubuntu, or an iPhone, they have a place to go to ask those questions that are full of the people who can actually help them do it.

But you know how programmers are. They “have babies.”  Or “take pictures of babies.” So our users started building Stack Exchange sites on unrelated topics, like parenting and photography, because the programmers we were serving expected—nay, demanded!—a place as awesome as Stack Overflow to ask about baby feeding schedules and f-stops and whatnot.

And we did such a good job of serving programmers that a few smart non-programmers looked at us and said, “Behold! I want that!” and we thought, hey!  What works for developers should work for a lot of other people, too, as long as they’re willing to think like developers, which is the best way to think. So, we decided that anybody who wants to get with the program is welcome to join in our plan. And these sites serve their own communities of, you know, bicycle mechanics, or what have you, and make the world safer for the Programmer Way Of Thinking and thus serve programmers by serving bicycle mechanics.

In the five years since then, our users have built 133 communities. Stack Overflow is still the biggest. It reminds me of those medieval maps of the ancient world. The kind that shows a big bustling city (Jerusalem) smack dab in the middle, with a few smaller settlements around the periphery. (Please imagine Gregorian chamber music).


View of Jerusalem
Stack Overflow is the big city in the middle. Because the programmer-city worked so well, people wanted to ask questions about other subjects, so we let them build other Q&A villages in the catchment area of the programmer-city. Some of these Q&A villages became cities of their own. The math cities barely even have any programmers and they speak their own weird language. They are math-Jerusalem. They make us very proud. Even though they don’t directly serve programmers, we love them and they bring a little tear to our eyes, like the other little villages, and they’re certainly making the Internet—and the world—better, so we’re devoted to them.

One of these days some of those villages will be big cities, so we’re committed to keeping them clean, and pulling the weeds, and helping them grow.

But let’s go back to programmer Jerusalem, which—as you might expect—is full of devs milling about, building the ENTIRE FUTURE of the HUMAN RACE, because, after all, software is eating the world and writing software is just writing a script for how the future will play out.

So given the importance of software and programmers, you might think they all had wonderful, satisfying jobs that they love.

But sadly, we saw that was not universal. Programmers often have crappy jobs, and their bosses often poke them with sharp sticks. They are underpaid, and they aren’t learning things, and they are sometimes overqualified, and sometimes underqualified. So we decided we could actually make all the programmers happier if we could move them into better jobs.

That’s why we built Stack Overflow Careers. This was the first site that was built for developers, not recruiters. We banned the scourge of contingency recruiters (even if they have big bank accounts and are just LINING UP at the Zion Gate trying to get into our city to feed on programmer meat, but, to hell with them). We are SERVING PROGRAMMERS, not spammers. Bye Felicia.

Which brings us to 2015.

The sites are still growing like crazy. By our measurements, the Stack Exchange network is already in the top 50 of all US websites, ranked by number of unique visitors, with traffic still growing at 25% annually. The company itself has passed 200 employees worldwide, with big plush offices in Denver, New York, and London, and dozens of amazing people who work from the comfort of their own homes. (By the way, if 200 people seems like a lot, keep in mind that more than half of them are working on Stack Overflow Careers).

We could just slow down our insane hiring pace and get profitable right now, but it would mean foregoing some of the investments that let us help more developers. To be honest, we literally can’t keep up with the features we want to build for our users. The code is not done yet—we’re dedicating a lot of resources to the core Q&A engine. This year we’ll work on improving the experience for both new users and highly experienced users.

And let’s not forget Stack Overflow Careers. I believe it is, bar none, the single best job board for developer candidates, which should automatically make it the best place for employers to find developer talent. There’s a LOT more to be done to serve developers here and we’re just getting warmed up.

So that’s why we took this new investment of $40m.

We’re ecstatic to have Andreessen Horowitz on board. The partners there believe in our idea of programmers taking over (it was Marc Andreessen who coined the phrase “Software is eating the world”). Chris Dixon has been a personal investor in the company since the beginning and has always known we’d be the obvious winner in the Q&A category, and will be joining our board of directors as an observer.

This is not the first time we’ve raised money; we’re proud to have previously taken investments from Union Square Ventures, Index Ventures, Spark Capital, and Bezos Expeditions. We only take outside money when we are 100% confident that the investors share our philosophy completely and after our lawyers have done a ruthless (sorry, investors) job of maintaining control so that it is literally impossible for anyone to mess up our vision of fanatically serving the people who use our site, and continuing to make the Internet a better place to get expert answers to your questions.

For those of you who have been with us since the early days of Our Incredible Journey, thank you. For those of you who are new, welcome. And if you want to learn more, check out our hott new “about” page. Or ask!

Need to hire a really great programmer? Want a job that doesn't drive you crazy? Visit the Joel on Software Job Board: Great software jobs, great people.

News stories from Thursday 15 January, 2015

Favicon for Doctrine Project 02:00 Cache 1.4.0 Released » Post from Doctrine Project Visit off-site link

Cache 1.4.0 Released

We are happy to announce the immediate availability of Doctrine Cache 1.4.0.

This release fixes a series of performance and compatibility issues in the filesystem-based cache adapters (#16, #50, #55).

New cache adapters for SQLite3 (#32) and Predis (#28) were implemented.

A new ChainCache (#52) was implemented, allowing multiple levels of caching, for performance and efficiency.
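
For illustration, a small usage sketch of the new ChainCache (class and method names as we understand the 1.4 API; treat the details as an assumption rather than an excerpt from the release):

use Doctrine\Common\Cache\ArrayCache;
use Doctrine\Common\Cache\ChainCache;
use Doctrine\Common\Cache\FilesystemCache;

// First level: in-memory for the current request; second level: filesystem.
// Fetches are answered by the fastest level that has the entry,
// saves are propagated to every level.
$cache = new ChainCache(array(
    new ArrayCache(),
    new FilesystemCache('/tmp/doctrine-cache'),
));

$cache->save('greeting', 'Hello, world!', 3600);
var_dump($cache->fetch('greeting')); // string(13) "Hello, world!"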

New interfaces were introduced, for better interface segregation and improved performance:

  • MultiGetCache (#29)
  • FlushableCache (#48)
  • ClearableCache (#48)

This release also causes the filesystem-based caches to change directory structure for saved files: please clear your file-based caches completely before upgrading.

You can find the complete changelog for this release in the release notes.

You can install the Cache component using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/cache": "1.4.0"
    }
}

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Wednesday 14 January, 2015

Favicon for Web Mozarts 17:39 Resource Discovery with Puli » Post from Web Mozarts Visit off-site link

Two days ago, I announced Puli’s first beta release. If you haven’t heard about Puli before, I recommend you to read that blog post as well as the Puli at a Glance guide in Puli’s documentation.

Today, I would like to show you how Puli’s Discovery Component helps you to build and use powerful Composer packages with less work and more fun than ever before.

The Problem

Many libraries support configuration code, translations, HTML themes or other content in files of a specific format. The Doctrine ORM, for example, is able to load entity mappings from special XML files:

<!-- res/config/doctrine/Acme.Blog.Post.dcm.xml -->
<doctrine-mapping ...>
    <entity name="Acme\Blog\Post">
        <field name="name" type="string" />
    </entity>
</doctrine-mapping>

This mapping, stored in the file Acme.Blog.Post.dcm.xml in our fictional “acme/blog” package, contains all the information Doctrine needs to save our Acme\Blog\Post object in the database.

When setting up Doctrine, we need to pass the location of the *.dcm.xml file to Doctrine’s XmlDriver. That’s easy as long as we do it ourselves, but:

  • What if someone else uses our package? How will they find our file?
  • What if multiple packages provide *.dcm.xml files? How do we find all these files?
  • We need to remove the appropriate setup code after removing a package.
  • We need to adapt the setup code after installing a new package.

Multiply this effort for every other library that uses user-provided files and you end up with a lot of configuration effort. Let’s see how Puli helps us to fix this.

Package Roles

For better understanding, it’s useful to assign two different roles to our packages:

  • Resource consumers, like Doctrine, process files of a certain format.
  • Resource providers, like our “acme/blog” package, ship such files.

Puli connects consumers and providers through a mechanism called resource binding. Resource binding is a very simple mechanism:

  1. At first, the consumer defines a binding type.
  2. Then, one or multiple providers bind resources to these types.
  3. Finally, the consumer fetches all the resources bound to their type and does something with them.

Let’s put on the hat of a Doctrine developer and see how this works in practice.

Discovering Resources

We start by defining the binding type “doctrine/xml-mapping” with Puli’s Command Line Interface (CLI):

$ puli type define doctrine/xml-mapping \
    --description "An XML entity mapping loaded by Doctrine's PuliDriver"

We passed a nicely readable description that is displayed when typing puli type:

Result of the command "puli type"

Great! Now we’ll use Puli’s ResourceDiscovery to find all the Puli resources bound to our type:

foreach ($discovery->find('doctrine/xml-mapping') as $binding) {
    foreach ($binding->getResources() as $resource) {
        // load $resource
    }
}

Remember we’re still wearing the Doctrine developer hat? Let’s put this code into a PuliDriver class so that anybody can easily configure Doctrine to load Puli resources.

Binding Resources

Now, we’ll put on the “acme/blog” developer hat. Let’s bind the XML file from before to Doctrine’s binding type:

$ puli bind /acme/blog/config/doctrine/*.xml doctrine/xml-mapping

The bind command accepts two parameters:

  • The path or glob for the Puli resources we want to bind.
  • The name of the binding type.

We can use puli find to check which resources match the binding:

Result of the command "puli find"

Apparently our XML file was registered successfully.

Application Setup

We’ll change hats one last time. This time, we’ll wear your hat. What do we have to do to use both the “doctrine/orm” package and the “acme/blog” package in our application?

The first thing obviously is to install the packages and the Puli CLI with Composer:

$ composer require doctrine/orm acme/blog puli/cli

Once this is done, we have to configure Doctrine to use the PuliDriver:

use Doctrine\ORM\Configuration;
 
// Puli setup
$factoryClass = PULI_FACTORY_CLASS;
$factory = new $factoryClass();
$repo = $factory->createRepository();
$discovery = $factory->createDiscovery($repo);
 
// Doctrine setup
$config = new Configuration();
$config->setMetadataDriverImpl(new PuliDriver($discovery));
 
// ...

With as little effort as this, Doctrine will now use all the resources bound to the “doctrine/xml-mapping” type in any installed Composer package.

Will it though?

Enabled and Disabled Bindings

Automatically loading stuff from all Composer packages is a bit scary, hence Puli does not enable bindings in your installed packages by default. We can see these bindings when typing puli bind:

Result of the command "puli bind"

If we trust the “acme/blog” developer and actually want to use the binding, we can do so by typing:

$ puli bind --enable 653fc9

That’s all, folks. :) Read more about resource discovery with Puli in the Resource Discovery guide in the documentation. And please leave me your comments below.

News stories from Monday 12 January, 2015

Favicon for Web Mozarts 20:59 Puli 1.0 Beta Released » Post from Web Mozarts Visit off-site link

Today marks the end of a month of very intense development of the Puli library. On December 3rd, 2014 the first alpha version of most of the Puli components and extensions was released. Today, a little more than a month later, I am proud to present to you the first beta release of all the libraries in the Puli ecosystem!

What is Puli?

If you missed my previous blog post, you are probably wondering what this Puli thing is. In short, Puli (pronounced “poo-lee”) is a toolkit which lets you map paths of a virtual resource repository to paths in your Composer package. For example, as the developer of the “acme/blog” package, I can map the path “/acme/blog” to the “res” directory in my package:

$ puli map /acme/blog res

After running this command, I can access all the files in my “res” directory through the Puli path “/acme/blog”. For example, if I’m using Puli’s Twig extension:

// res/views/post.html.twig
echo $twig->render('/acme/blog/views/post.html.twig');

But I’m not the only one who can do this. Every developer using my package can do the same. And I can use the Puli paths of every other package. Basically, Puli is like PSR-4 autoloading for anything that’s not PHP.

You should read the Puli at a Glance guide to learn more about Puli’s exciting possibilities.

The Puli Components

Puli consists of a few core components that implement Puli’s basic functionality. First, let’s talk about the components that you are most likely to integrate into your applications and libraries:

  • The Repository Component implements a PHP API for the persistent storage of arbitrary resources in a resource repository:
    use Puli\Repository\FilesystemRepository;
    use Puli\Repository\Resource\DirectoryResource;
     
    $repo = new FilesystemRepository();
    $repo->add('/config', new DirectoryResource('/path/to/resources/config'));
     
    // /path/to/resources/config/routing.yml
    echo $repo->get('/config/routing.yml')->getBody();
  • The Discovery Component allows you to define binding types and let other packages bind resources to these types. Read the Resource Discovery guide in the documentation to learn more about this topic.
  • The Factory Component contains a single interface PuliFactory. This interface creates repositories and discoveries for you. You can either implement the interface manually, or – and that’s what you usually do – let Puli generate one for you.

Next come the components that you use as a developer in your daily life:

  • The Command Line Interface (CLI) lets you map repository paths, browse the repository, define binding types and bindings and much more by typing a few simple commands in your terminal. The CLI also builds a factory that you can use to load the repository and the discovery in your code:
    $factoryClass = PULI_FACTORY_CLASS;
    $factory = new $factoryClass();
     
    // If you need the resource repository
    $repo = $factory->createRepository();
     
    // If you need the resource discovery
    $discovery = $factory->createDiscovery($repo);

    The configuration that you pass to the CLI is stored in a puli.json file in the root of your Composer package. This file should be distributed with your package.

  • The Composer Plugin loads the puli.json files of all installed Composer packages. Through the plugin, you can access any of the resources and bindings that come with any of the libraries you use.
  • The Repository Manager implements the actual business logic behind the CLI and the Composer Plugin. This is Puli’s workhorse.

The Puli Extensions

Currently, Puli features a few extensions that are mostly targeted at the Symfony ecosystem, because – quite simply – that’s the framework I know best. As soon as the first stable release of Puli is out, I would like to work on extensions for other PHP frameworks, but I could use your help with that.

The following extensions are currently available:

Supporting Libraries

During Puli’s development, I created a few small supporting libraries that I couldn’t find in the high quality that I needed to build a solid foundation for Puli. These libraries also had their release today:

  • webmozart/path-util provides robust, cross-platform utility functions for normalizing and transforming filesystem paths. After using it for a few months, I love its simplicity already. I highly recommend giving it a try (see the short usage sketch after this list).
  • webmozart/key-value-store provides a simple yet robust KeyValueStore interface with implementations for various backends.
  • webmozart/json is a wrapper for json_encode()/json_decode() that normalizes their behavior across PHP versions and features integrated JSON Schema validation.
  • webmozart/glob implements Git-like globbing, in that wildcards (“*”) match both characters and directory separators. I was made aware today that a similar utility seems to exist in the Symfony Finder component, so I’ll look into merging the two packages.
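
As a quick illustration of webmozart/path-util mentioned above, a short usage sketch (method names as documented for the library; treat the specifics as an assumption):

use Webmozart\PathUtil\Path;

// Normalize a messy path into a canonical form.
echo Path::canonicalize('/var/www/puli/..//config/./routing.yml');
// => /var/www/config/routing.yml

// Turn an absolute path into one relative to a base directory, and back.
echo Path::makeRelative('/var/www/project/config/routing.yml', '/var/www/project');
// => config/routing.yml

echo Path::makeAbsolute('config/routing.yml', '/var/www/project');
// => /var/www/project/config/routing.yml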

Road Map

I would like to release a stable version of the fundamental Repository, Discovery and Factory components by the end of January 2015. These components are quite stable already and I don’t expect any serious changes.

The CLI, Composer Plugin and Repository Manager are a bit more complex. They have undergone heavy changes during the last weeks. All the functionality that is planned for the final release is implemented now, but the components need testing and polishing. I plan to release a final version of these packages in February or March 2015.

Feedback Wanted

To make a successful stable release possible, I need your feedback! Please integrate Puli, test it and use it. However – as with any beta version – please don’t use it in production.

Read Puli at a Glance and Getting Started to get started. Happy coding! :)

Please leave me your feedback below. Follow PuliPHP on Twitter to receive all the latest news about Puli.

Favicon for Doctrine Project 02:00 DBAL 2.4.4 and 2.5.1 released » Post from Doctrine Project Visit off-site link

DBAL 2.4.4 and 2.5.1 released

We are happy to announce the immediate availability of Doctrine DBAL 2.4.4 and 2.5.1. Various bugs that prevented users from upgrading to DBAL 2.5.x have been fixed in DBAL 2.5.1, along with some others. DBAL 2.4.4 only contains backported bug fixes addressed since the release of DBAL 2.5.0.

The index renaming feature introduced in DBAL 2.5.0 caused trouble for some MySQL users utilizing ORM’s schema tool to upgrade their schemas where the schema tool was generating invalid DROP INDEX / CREATE INDEX SQL. Fixing this issue was only possible by introducing a minor BC break (change in behaviour), please see UPGRADE.md for more information (we know that this is a patch release but we are actually reverting a BC break here). MariaDB users had problems upgrading to DBAL 2.5.0 because of the new platform detection feature not taking MariaDB backends into account. Additionally, some minor bugs around schema creation / introspection and DDL in various platforms were fixed.

Please note that this release does not yet include a fix for users having problems with the DBAL connection no longer being lazy since DBAL 2.5.0 when retrieving the underlying platform. Unfortunately we have not found a good solution for this issue yet. Until the issue is fixed, if you are encountering problems, please work around it by directly setting the platform version via the serverVersion configuration option described in the documentation.
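
A minimal sketch of that workaround, assuming a standard DriverManager setup (the connection parameters and version value below are illustrative):

use Doctrine\DBAL\DriverManager;

// Passing "serverVersion" lets DBAL pick the database platform up front,
// without opening a connection just to detect the server version.
$connection = DriverManager::getConnection(array(
    'driver'        => 'pdo_mysql',
    'host'          => 'localhost',
    'user'          => 'app',
    'password'      => 'secret',
    'dbname'        => 'app',
    'serverVersion' => '5.6.22', // example value; use your actual server version
));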

You can find all the changes on JIRA:

You can install the DBAL using Composer and the following composer.json contents:

{
    "require": {
        "doctrine/dbal": "2.4.4"
    }
}
{
    "require": {
        "doctrine/dbal": "2.5.1"
    }
}

Please report any issues you may have with the update on the mailing list or on Jira.

News stories from Friday 09 January, 2015

Favicon for Grumpy Gamer 18:49 I Was A Teenage Lobot » Post from Grumpy Gamer Visit off-site link

This was the first design document I worked on while at Lucasfilm Games. It was just after Koronis Rift finished and I was really hoping I wouldn't get laid off.  When I first joined Lucasfilm, I was a contractor, not an employee. I don't remember why that was, but I wanted to get hired on full time. I guess I figured I'd show how indispensable I was by helping to churn out game design gold like this.

This is probably one of the first appearances of "Chuck", who would go on to "Chuck the Plant" fame.

You'll also notice the abundance of TM's all over the doc. That joke never gets old.  Right?

Many thanks to Aric Wilmunder for saving this document.

Shameless plug to visit the Thimbleweed Park Development Diary.

[Scanned pages of the design document: lobots_1_thumb.jpg through lobots_18_thumb.jpg]

News stories from Friday 02 January, 2015

Favicon for Grumpy Gamer 01:40 Thimbleweed Park Development Diary » Post from Grumpy Gamer Visit off-site link

The Thimbleweed Park Development Diary is now live. Updated at least every Monday, probably much more.

News stories from Wednesday 31 December, 2014

Favicon for ircmaxell's blog 21:00 2014 - A Year In Review » Post from ircmaxell's blog Visit off-site link
Wow, another year gone by. Where does the time go? Well, considering I've written a year-end summary the past 2 years, I've decided to do it again for this year. So here it is, 2014 in review:

Read more »

News stories from Tuesday 30 December, 2014

Favicon for ircmaxell's blog 20:00 PHP Install Statistics » Post from ircmaxell's blog Visit off-site link
After yesterday's post, I decided to do some math to see how many PHP installs had at least 1 known security vulnerability. So I went to grab statistics from W3Techs, and correlated that with known Linux Distribution supported numbers. I then whipped up a spreadsheet and got some interesting numbers out of it. So interesting, that I need to share...
Read more »

News stories from Monday 29 December, 2014

Favicon for ircmaxell's blog 22:00 Being A Responsible Developer » Post from ircmaxell's blog Visit off-site link
Last night, I was listening to the combined DevHell and PHPTownHall Mashup podcast recording, listening to them discuss a topic I talked about in my last blog post. While they definitely understood my points, they for the most part disagreed with me (there was some contention in the discussion though). I don't mind that they disagreed, but I was rather taken aback by their justification. Let me explain...

Read more »

News stories from Friday 26 December, 2014

Favicon for #openttdcoop 00:35 New member: Hazzard » Post from #openttdcoop Visit off-site link

Hell000 and Merry Christmas! We are happy to announce that our inner circles have gained yet another person, Hazzard!

Being around for a long while, most of you probably know him, but if you don’t, Hazzard is a great builder and person. His logic mechanisms and other construction put your brains in greater hazard when you see them. He has been generally very helpful, teaching people, being a nice person, and everything else.

Everybody, please welcome Hazzard to the openttdcoop members club!

News stories from Wednesday 24 December, 2014

Favicon for Grumpy Gamer 22:54 Happy Holidays » Post from Grumpy Gamer Visit off-site link

[Image: happy_holidays_2014.png]

News stories from Monday 22 December, 2014

Favicon for nikic's Blog 02:00 PHP's new hashtable implementation » Post from nikic's Blog Visit off-site link

About three years ago I wrote an article analyzing the memory usage of arrays in PHP 5. As part of the work on the upcoming PHP 7, large parts of the Zend Engine have been rewritten with a focus on smaller data structures requiring fewer allocations. In this article I will provide an overview of the new hashtable implementation and show why it is more efficient than the previous implementation.

To measure memory utilization I am using the following script, which tests the creation of an array with 100000 distinct integers:

$startMemory = memory_get_usage();
$array = range(1, 100000);
echo memory_get_usage() - $startMemory, " bytes\n";

The following table shows the results using PHP 5.6 and PHP 7 on 32bit and 64bit systems:

        |   32 bit |    64 bit
------------------------------
PHP 5.6 | 7.37 MiB | 13.97 MiB
------------------------------
PHP 7.0 | 3.00 MiB |  4.00 MiB

In other words, arrays in PHP 7 use about 2.5 times less memory on 32bit and 3.5 on 64bit (LP64), which is quite impressive.

Introduction to hashtables

In essence PHP’s arrays are ordered dictionaries, i.e. they represent an ordered list of key/value pairs, where the key/value mapping is implemented using a hashtable.

A hashtable is a ubiquitous data structure, which essentially solves the problem that computers can only directly represent continuous integer-indexed arrays, whereas programmers often want to use strings or other complex types as keys.

The concept behind a hashtable is very simple: The string key is run through a hashing function, which returns an integer. This integer is then used as an index into a “normal” array. The problem is that two different strings can result in the same hash, as the number of possible strings is virtually infinite while the hash is limited by the integer size. As such hashtables need to implement some kind of collision resolution mechanism.

There are two primary approaches to collision resolution: Open addressing, where elements will be stored at a different index if a collision occurs, and chaining, where all elements hashing to the same index are stored in a linked list. PHP uses the latter mechanism.

Typically hashtables are not explicitly ordered: The order in which elements are stored in the underlying array depends on the hashing function and will be fairly random. But this behavior is not consistent with the semantics of PHP arrays: If you iterate over a PHP array you will get back the elements in the exact order in which they were inserted. This means that PHP’s hashtable implementation has to support an additional mechanism for remembering the order of array elements.
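
To make these two requirements concrete, here is a minimal userland sketch (purely illustrative; it is of course not how the engine implements hashtables in C) of a hashtable that resolves collisions by chaining and also remembers insertion order:

class OrderedHashTable
{
    private $slots = array();  // hash index => list of array(key, value) pairs (collision chain)
    private $order = array();  // keys in insertion order
    private $size;

    public function __construct($size = 8)
    {
        $this->size = $size;
    }

    public function set($key, $value)
    {
        $idx = crc32($key) % $this->size;            // hash function -> integer slot
        if (!isset($this->slots[$idx])) {
            $this->slots[$idx] = array();
        }
        foreach ($this->slots[$idx] as &$bucket) {
            if ($bucket[0] === $key) {               // key already present: overwrite
                $bucket[1] = $value;
                return;
            }
        }
        unset($bucket);
        $this->slots[$idx][] = array($key, $value);  // append to the collision chain
        $this->order[] = $key;                       // remember insertion order
    }

    public function get($key)
    {
        $idx = crc32($key) % $this->size;
        if (!isset($this->slots[$idx])) {
            return null;
        }
        foreach ($this->slots[$idx] as $bucket) {
            if ($bucket[0] === $key) {
                return $bucket[1];
            }
        }
        return null;
    }

    public function keysInOrder()
    {
        return $this->order;  // iteration follows insertion order, like a PHP array
    }
}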

The old hashtable implementation

I’ll only provide a short overview of the old hashtable implementation here; for a more comprehensive explanation please see the hashtable chapter of the PHP Internals Book. The following graphic is a very high-level view of what a PHP 5 hashtable looks like:

[Figure: basic_hashtable.svg]

The elements in the “collision resolution” chain are referred to as “buckets”. Every bucket is individually allocated. What the image glosses over are the actual values stored in these buckets (only the keys are shown here). Values are stored in separately allocated zval structures, which are 16 bytes (32bit) or 24 bytes (64bit) large.

Another thing the image does not show is that the collision resolution list is actually a doubly linked list (which simplifies deletion of elements). Next to the collision resolution list, there is another doubly linked list storing the order of the array elements. For an array containing the keys "a", "b", "c" in this order, this list could look as follows:

[Figure: ordered_hashtable.svg]

So why was the old hashtable structure so inefficient, both in terms of memory usage and performance? There are a number of primary factors:

  • Buckets require separate allocations. Allocations are slow and additionally require 8 / 16 bytes of allocation overhead. Separate allocations also means that the buckets will be more spread out in memory and as such reduce cache efficiency.
  • Zvals also require separate allocations. Again this is slow and adds allocation overhead.