Why .dev domains redirect to HTTPS in Chrome and Firefox

Thanks to Mattias Geniar for solving this frustrating mystery for me.

In short, `.dev` isn’t only a convenient “fake” top-level domain for use in local web development; it’s also a legitimate top-level domain owned by Google. In December of 2017, Chrome and Firefox were updated to force all requests to `.dev` hosts to load over HTTPS.

What should we do differently?

One workaround is to create a self-signed certificate and add it to your local machine’s trusted certificate store. That sounds like a lot of work to me.
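If you do go the certificate route, here’s a rough sketch of the idea (the `myproject.dev` host name and file names are placeholders, and the trust step shown is macOS-specific; other operating systems and Firefox keep their own certificate stores):

$ # Generate a self-signed certificate for a hypothetical local host name
$ openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
    -keyout myproject.key -out myproject.crt -subj "/CN=myproject.dev"
$ # On macOS, add it to the system keychain as trusted
$ sudo security add-trusted-cert -d -r trustRoot \
    -k /Library/Keychains/System.keychain myproject.crt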

The lazier way is to use something other than `.dev` for local development. I like `dev.example.com`, but I think I’ll go with `example.test`, as Mattias suggests.
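For what it’s worth, `.test` is reserved specifically for testing and will never resolve on the public Internet, so it’s safe from surprises like this one. Pointing it at your machine is a one-liner (assuming a standard /etc/hosts setup):

$ # Map example.test to the local machine (sudo required to edit /etc/hosts)
$ echo "127.0.0.1 example.test" | sudo tee -a /etc/hosts
$ # Then point your web server's virtual host or server block at example.test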

Chrome & Firefox now force .dev domains to HTTPS via preloaded HSTS

Stop using 4-digit numeric passwords on your phone

a 4-digit numeric passcode would only take 34 minutes to brute force, while an 8 digit alphanumeric passcode would still take over a million years

Source: [FREE] Apple Versus the FBI, Understanding iPhone Encryption, The Risks for Apple and Encryption – Stratechery by Ben Thompson

This is a very good read on Apple’s fight with the FBI over what are appropriate measures to access data on an alleged terrorist’s phone.

Proposal: Designate commits as minor edits in Git

Git blame: A reliable rat

A great benefit of version control systems is that they make it possible to see who introduced substantive changes in the past. For example, in Git, `git blame <file>` will reveal who last edited each line of code in `<file>`.

Despite the cheeky name, the greatest value of git blame isn’t so much blaming others for their mistakes as identifying who to confer with when proposing changes. The last developer to touch a line of code may have an interest in its current state, can answer questions about it, and may have valuable perspective that will improve your proposed changes.
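For example (the file name, hashes, and author names below are made up for illustration):

$ git blame wp-includes/formatting.php
a1b2c3d4 (Jane Developer   2014-06-02 11:23:45 -0700  87)     $text = trim( $text );
e5f6a7b8 (Sam Contributor  2015-01-15 09:02:10 -0800  88)     return $text;

Each line shows the commit that last touched it, who authored that commit, and when.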

Slow standards adoption? Blame blame.

Unfortunately, this is an obstacle to the adoption of consistent code standards in an open source project like WordPress.

Any patches you make to legacy code whose sole purpose is applying coding standards, without introducing substantive changes, will make you appear as the last author in git blame, obscuring valuable information about whoever made the last substantive changes. This type of edit is therefore discouraged.

As a result, WordPress’ adoption of its own coding standards in core code slows way down.

This is a bummer, because there would be dozens of people happy to make core contributions strictly to apply code standards. It’d be a great way for newbies to learn the ropes while making incremental improvements to code quality.

How about “minor commits” that blame is blind to?

Wouldn’t it be nice if you could indicate that a change is a minor edit when you commit it? git blame would skip over these minor edits to display only substantive edits from older, non-minor commits. Obstacle to code standards adoption solved.

This could look something like `$ git commit --minor <file to commit>`.
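A purely hypothetical workflow (again, none of this exists in Git today) might be:

$ # Hypothetical: commit a standards-only cleanup flagged as a minor edit
$ git commit --minor -am "Apply WordPress coding standards; no functional changes"
$ # Hypothetical: blame would skip that commit and keep attributing the
$ # reformatted lines to the last substantive author
$ git blame wp-includes/formatting.php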

Implementation considerations (wherein I wade way out past my depth)

For this to work, I’m aware of at least three things that would need to change in Git’s internals:

  1. Implement the `--minor` flag (or whatever) in `git commit`
  2. Extend the data model of commit objects (the records Git stores for each commit) to include optional metadata that means “this is a minor edit”.
  3. Make git blame aware of the “this is a minor edit” metadata and walk as far back through the history as needed to find an edit that is not minor.

Number 3 would add a bit of performance overhead to running git blame. I could be way off here, but I doubt that’s a deal breaker.

Number 2 might be, though. The structure of commit objects is super lean: just a reference to a tree describing the current file structure, the commit’s author and committer, the commit message, and a reference to parent commit object(s). Nothing more. Thus, adding metadata to support this type of feature could increase every commit’s size by a significant percentage, and that would add up when applied to an entire repository’s object graph. Would that be justified by the limited utility that a minor-edit feature would add?
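For reference, here’s roughly what a commit object looks like today (via git cat-file -p HEAD), with a hypothetical minor header sketched in; that header name is my own invention, not anything Git actually supports:

$ git cat-file -p HEAD
tree 9bedf67800b2923982bdf60c89c57ce6fd2e9213
parent 3f75a12cd4dd9e6a0a4c3c64e85d2f8a1b2c3d4e
author Jane Developer <jane@example.com> 1446840000 -0800
committer Jane Developer <jane@example.com> 1446840000 -0800
minor true

Apply coding standards to formatting.php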

Perhaps this isn’t such a big issue, as the “minor” metadata flag could either be set to true, or be nonexistent and implied to be false. It would only take up extra disk space for commits where minor = true, instead of for 100% of commits.

Applying this to WordPress

I wrote this up with Git examples because I’m much more familiar with it, but WordPress still uses SVN for core development, and probably will for some time.

So until and unless WordPress completely migrates to Git, we’d also need an equivalent new “minor edit” feature added to SVN if we were to benefit fully in WP developer land.

WCSEA 2015: Panel — WordPress at Scale: Enterprise, Media, and Education

I’m at WordCamp Seattle today and will be posting notes from sessions throughout the day. These are posted right after the session, and could be a little rough.

This was a panel discussion moderated by Grant Landram, with Evan Cordulack, Jeremy Felt, Josh Kadis, and Nathan Letsinger. Follow those links to their Twitter pages. Here is the WordCamp.org session description.

Introductions

Grant Landram (moderator): Senior project manager at 10up.com, previously with FreshMuse.

Jeremy Felt: Platform Manager at Washington State University. Previously senior engineer at 10up.

Josh Kadis: Internal and external product development at Alley Interactive, and Seattle dev meetup organizer. Previously the senior technologist at Quartz.

Evan Cordulack: Engineering manager for the Seattle Times.

Nathan Letsinger: Product lead at Grist, designer and developer.

Topic 1: What is “Scale”?

What does scale mean or look like with you and your organization?

Josh: Scale means traffic, pageviews, load on server. But there’s also internal users, scale of code base, database.

Evan: Scale means that as the database gets bigger, the things we thought WP was good at out of the box left us with some hangups. We still fight with that every day.

Nathan: Scale means we needed to deal with just the sheer number of users. 2 dozen or 600 users who need to log in raises a lot of concerns. Could be security. Many people logging in at the admin level means you should evaluate security. We have 700 bylines for posts, and didn’t want that many fake users in WP.

What special considerations do you have?

Jeremy: When you’re no longer running a single WP site, a lot of caching considerations get really complicated. Also a lot depends on your budget. With more money, have a sysadmin work on this for you. If you don’t, there’s a lot you can get away with on a single Linode with Memcached. Even if you’re paying someone to do this for you, it makes sense to understand what they’re doing for you, so you can write code that works well in sync with the systems you have in place.

Josh: The use case most people have for WP is one where every non-logged-in user gets the same HTML markup. If you’re working on a BuddyPress site, then that’s a totally different story. But normally, traffic can quickly become a non-issue with full page caching.

Evan: We try to use caching at every level. We use Akamai, then Varnish, Memcached… and at the WP API level, we use transients everywhere to take advantage of the Memcached object cache. We managed to totally kill page loading speed early on in development. Once we started taking advantage of caching APIs, it shaved off seconds. There are things built-in that get you really far, really fast — whatever level you’re at, even if you don’t have object systems in place. And it’ll make you feel like a pro.

A lot of managed hosts disable caching for logged-in users. So if you have high logged-in user traffic, you need to be more deliberate about your caching strategy.

Nathan: You want to worry about caching. For us that meant full page caching. A story about this: Before we were on WP, we had that problem. Our best days for editors had huge traffic spikes. Those were our worst days for our developers, when we were having to spin up new servers, etc. We learned about Batcache from the WordPress community, and ported it over to our old system. But in the process we learned enough about the WP ecosystem that it convinced us to embrace WP generally.

Nathan: You need to know the constraints your caching systems should have in place, so that you know what not to do to break things. For example, we can’t add database tables without also having to modify our caching systems to accommodate those. I actually found these constraints freeing.

What other tools are you using?

How do you choose which systems to use?

Nathan: It also depends on your passion. We’re not sysadmins, but we were playing that role part time. Can we hire someone to do this for us? We decided to pay someone to just solve this for us. Even if we felt we were paying too much, there’s a lot of value in peace of mind. So when your traffic is spiking on a Saturday, I can keep sipping my Manhattan and not worry about it.

Topic 2: Scaling Strategies

Dev Team Sizes for your organizations

Jeremy: 20 or so on the team doing content, design, dev, everything. We’re supporting 600 sites and around 1,000 users. Central university support.

Josh: We have 40 people at Alley Interactive including non devs (PMs, design, etc.). On a given project we may have as few as a dev and a PM. But sites we support with caching strategies are as large as the NY Post, which is one of the largest WP sites anywhere. They’re hosted on WP.com VIP which handles scaling, traffic, caching in the same way for us as it does for anyone else. That enables us to be the dev team for them with a relatively small team because we don’t need to worry about it.

Evan: Dozen devs. While there are a lot of users in the system at a time, there is also content being injected from various places constantly.

Nathan: It’s 2 devs and myself at Grist.

Key parts of your tool set (tech, process, organizational) for scaling

Jeremy: Having a local environment matching production as closely as possible is important. Then after that I rely on Nginx and MySQL (although Zach Brown would recommend alternatives to MySQL like MariaDB).

Josh: For sites with very large databases, e.g. the New York Post, which has a million database rows, we use Elasticsearch for queries (not just user-facing search). Even if you already know the ID of a post, in a large database a post-specific query can be expensive.

Nathan: Lots of users writing means you need something like a calendar for scheduling, and some kind of chat program so everyone’s in touch w/ each other. This is pretty essential for the overall team.

The Reliability of Elasticsearch

[A question from me, hence the greater detail in this document:] Josh mentioned depending on Elasticsearch for queries. Kyle Kingsbury posted some research into Elasticsearch last year that raised questions about its reliability. Have you encountered that or had to deal with it in production?

Jeremy: Automattic makes heavy use of Elasticsearch, with people doing the work full time. If they can trust it and they’re using it at insane scale, even though they have to rebuild their indexes occasionally, then it’s probably fine.

Evan: There are a lot of queries that are surprisingly taxing at scale. Times Elasticsearch has failed for us are usually times that we did something wrong. You can write integration stuff that’ll make your life easier. Inevitably you’ll have to reindex and it’ll be a drag.

Zach Brown, from the audience: Basically, don’t use it as canonical. It’s just a way to access info quickly.

Workflow for content that moves from outside of WP into WP

Jeremy: Even though we think TinyMCE and DFW are neat, nobody uses them, because people pass things around in Word and then copy/paste them into WP.

Josh: At Quartz it became the policy that stories had to be written in WP. I could still tell who wasn’t doing that. But we were able to help allay concerns from journalists who were a little put off by the interface, because it becomes familiar eventually. “Word was new to you too, once, and this will get better eventually.”

Evan: We don’t have anything like an editorial calendar, and we don’t have a good way to do this.

[After a follow-up question from the audience about content/workflow policy governance and enforcement:] We rely on WP for curation; what shows up where. That’s where WP does the most work for us. Revisions of the story will happen in a different editing platform, the reason being we have to feed the printing press. Some would like to do digital first, but “that is not my department”.

Nathan: Autosave has been great for the concern about browsers crashing.

Do you run your sites through performance analyzer tools?

Jeremy: I obsess over webpagetest.org. On launch days I’m refreshing every 10 minutes, trying new things, trying to get others out.

Josh: We’re taking the opposite approach: starting a project by structuring build tasks, so that by the time you get to the end, the Grunt or Gulp work (or whatever you’re using) has already been done. [In other words, this is a philosophy of beginning with high performance as your baseline, rather than revising to get it right at the end.]

Evan: Since these tools are easy to use once you show someone, then everyone cares about performance. This is good and bad because meetings get really weird really fast (talking about which DOM events matter with non-devs is strange), but good to have everyone considering it.

Do you use tools to find performance issues specifically in code?

Jeremy: John Blackbourn’s Query Monitor plugin. Using things like Xdebug or PhpStorm with step debugging can identify loops that are running multiple times and slowing things down.

Josh: Peer code review is huge for performance because you can rely on the experience of your team, seeing what things are slow, what’s caused problems for them in the past, etc. Leverage your embodied experience.

Besides plugin updates and installs, what changes when you move to a multiserver, load-balanced environment?

Jeremy: I’m only running one VM, but… it becomes more to manage, more pipes to keep connected, so more things to worry about going down. But, scaling horizontally by having more redundant things can be a less painful thing where everything’s beefy and fast…

Evan: We have a ton of internal conversations about this, and everyone has an opinion. Whatever it is, take your time. If you’re using a cloud platform, you can experiment with scaling things up and down. (If you’re having to do this in production, that’s another story.)

For large editorial teams, how do you manage training as you make continual changes?

Jeremy: Every Friday morning we have an open lab on campus for people to come talk. We say “we’re working on this, here’s a preview” and we get good feedback.

Josh: It’s important to treat dev of internal WP functionality as a UX problem in the same way you would treat a design change on the front end. Without completely redesigning the admin interface, there is still a lot you can do to make a feature intuitive and easy for people to figure out (or totally baffling and unintuitive). There’s a degree of design thinking you can apply to internal plugin dev that I’ve found helpful; even applying it briefly and semi-formally to an internal plugin can really speed up adoption.

Evan: If new changes require one-time actions before you can use them or before the UX returns to normal, and people don’t know about it, that upsets people. It’s important to give people a heads-up about changes that are coming.

Architecture constraints migrating to WP from other platforms

Josh: The constraint is less about the total number of content objects than it is about the differences in information architecture. The great thing about migrations is that you’re typically not that concerned with performance (within reason). So when you start the migration script, whether it takes 6 hours or 10 hours to finish doesn’t matter so much… But if you can’t get a clean map of the information architecture, that can be difficult.

Evan: We had a lot of problems with DB performance. It would get to a certain size where we were timing out, particularly around post_meta. It was failing in mysterious ways. We were in a hosted environment where there were settings we didn’t have access to. Lots of little roadblocks to clear, remove one, hit another, so we had to get through a bunch of that.

Nathan: We use Edit Flow, which includes the editorial calendar. There are many media companies and even mid-sized blogs using this who need a calendar. This is an area for growth. We’re testing CoSchedule, which is a monthly paid service and has great features, but is off of WP’s architecture. I think this is an area where media companies have a need.

Have you encountered bloat from leaving revision management active?

Josh: On one occasion I’ve run a wp-cli script to clear out revisions on very old posts.

Nathan: That’s WordPress.com VIP’s problem. :)

What do we need to know about what’s coming?

Jeremy: The easy answer is the JSON REST API and having everything available to you via JSON.

Nathan: Improvements to taxonomy terms will open a lot of opportunities.


Standardizing web development pricing

Update: Thanks in part to the comments, which completely panned this idea, I’ve come to see this proposal is a bad one. I’m preserving it here for the record, but it no longer reflects my thinking.

If doctors can be made to charge less and provide better service for their patients, however uncomfortable for them, then maybe so can web designers.

One nifty thing about Obamacare: More checklist use in hospitals

My cousin-in-law shared this Planet Money segment about Obamacare driving checklist adoption in hospitals as a cost-cutting measure. The whole thing is good, but they mention checklists specifically at around the 7:30 mark.

The upshot: New rules for Medicare payments change the incentives for hospitals to cut costs beginning in January. Instead of being paid for each procedure, which incentivizes more procedures (say, to fix complications from the first procedure), some doctors and hospitals in experimental programs will be paid lump sums to fix given patient problems. More procedures to fix complications will no longer result in higher compensation. This means the care will both cost less and tend to be better.

In reaction to this, more hospitals are implementing checklists than before, because checklists are shown to dramatically reduce complications from surgery. Lives were on the line before this change, and will continue to be, but making doctors directly accountable in a real financial way will do more to encourage the use of checklists (and other best practices) than simply having available the knowledge that they work.

Similar issues in the web software service industry

At Rocket Lift, as is the case generally in our industry, our incentives are out of whack. Our clients’ lives may not be on the line, but when we make mistakes that create more work, we often charge more to fix them. Just like with the doctors in the NPR segment, we never try to exploit this. Just like the doctors, the nature of our work means there will always be unforeseen complications, and we can acknowledge that the incentive structure is wrong. And just like the doctors, this is an uncomfortable reality for us to confront. But we ought to.

Solving this in a way that links billing to fixing problems and benefits both us and our clients is very challenging. Frankly, I don’t know how to do it. But I continue to believe that our industry needs to act more professional if we want to be treated like adults.* Just as doctors are facing the hard questions about cost-saving, we web professionals ought to, as well — and can, for the benefit of our clients and the world, with all of the latent potential for better technology to improve lives.

Doctors and hospitals are having this difficult conversation because Medicare regulations are changing. Absent government rules (all joking about the Healthcare.gov debacle aside, we’re not about to be cost-regulated by the feds), what could this look like in our field? Flat fees are untenable, because too much of our work is fundamentally novel and cloaked in unknowns. Not-to-exceed commitments threaten to sink service businesses. Variations on hourly billing are the only sane choice for web contractors in most cases.

A Proposal

I invite feedback from our web development peers and customers of our industry on this idea:

I propose a standard cost schedule, with not-to-exceed guarantees for certain tasks based on task requirements (e.g. which browsers are supported?) and known — and unknown — factors that impact the likely time required (tools used, quality of existing code base, etc.). I imagine an open database of tasks, estimates, and actual time spent, contributed to by many of our peers. This would allow us to peg the “right” cost for a given task with given circumstances, based on industry consensus. This would be a neutral, data-driven, third-party arbiter setting and validating estimates of time required. It would play a role similar to the one Medicare plays for doctors, but for time instead of cost, and on a voluntary basis between contractors and clients.

With this database and resource to refer to, contractors could still charge whatever they’d like, provided clients were willing to pay. We would compete on a mix of factors including hourly rate, track record, culture, process, specialty, need/service fit, etc. …

I’ll admit I cringe to think of having to estimate and bill according to a standard schedule of codes for different tasks. “HTML and CSS for a single custom dynamic WordPress template, with five distinct page components and deliverable SASS source code, minified… Let’s see, is that five units of code SWC-24 (20 hours), or a single unit of HUI-23 (18 hours)?” Ick! But I believe we could do much better and avoid that nightmare, with enough investment of thought and energy to build a voluntary system for technologists, by technologists (instead of bureaucrats).

I think this could help us to structure pricing for web work in novel ways that would reduce “complications” and protect clients from cost overruns. Two other benefits: shortcutting the time and energy developers spend trying to figure out what they are worth, and giving them the confidence to in fact charge what they are worth.

What are your thoughts?


*If there’s any question about our industry’s desire to be treated like professionals, consider the energy we’ve spent producing films lampooning The Vendor Client Relationship in Real World Situations, and the popularity of Mike Monteiro’s resources for web designers to value themselves and grow a backbone.

By the way, this post isn’t meant to focus on Obamacare. I’ll delete comments picking a fight about it, so no need to waste your time there, folks. :-)

Douglas Adams on the Internet

Interactivity. Many-to-many communications. Pervasive networking. These are cumbersome new terms for elements in our lives so fundamental that, before we lost them, we didn’t even know to have names for them

— Douglas Adams in a remarkably good essay from years ago on how elements of the Internet are perfectly natural for humans. Worth reading start to finish.

Git Submodule Cheatsheet

I’m aware git submodules aren’t awesome, but a lot of what makes them a pain is having to remember the arcane sequence of commands to invoke when using them in a collaborative team project. I’m creating this cheat sheet for my own reference. If you find it useful too, or have suggestions for improving it (or if you spot errors to correct), let me know.

Adding a submodule to a project

$ cd <repo root>
$ git submodule add <readable remote submodule repo> <relative local path to install target>

Making changes to a submodule

Here we want to first push our changes to the submodule’s upstream repo, and then record the change in the parent project repo. It’s very important to not skip the first part, as that would break the submodule for other developers when they pull changes to the parent project with an updated reference to a nonexistent state of the submodule repo.

$ cd <submodule path>
# do stuff
$ git commit ... # Changes committed to the submodule; the parent repo only sees that the submodule's commit has changed
$ git push ... # Push submodule changes upstream; the parent repo is unaffected
$ cd <anywhere within parent repo>
$ git commit ... # Commit the updated reference to the new submodule state (referenced by commit)
$ git push ... # Push the parent repo's new commit, including the updated submodule reference
# Tell your fellow coders to be sure to update submodules when they next pull.

Cloning a repo with submodules for the first time

After cloning the repo, initialize and update your submodules. git submodule init registers each submodule listed in .gitmodules in your local Git config. git submodule update then fetches and checks out the exact submodule commits the parent repo references.

$ git clone ...
$ git submodule init    # register submodules from .gitmodules in local config
$ git submodule update  # fetch and check out the referenced submodule commits

Pulling commits including updates to submodules

$ git pull ...          # parent repo now references new submodule commits
$ git submodule update  # check out those commits in each submodule

The Brief: Skype is not your Friend

The Brief’s roundup from last Friday includes a good summary of why you shouldn’t be using Skype. The lowdown: they’re too surveillance-friendly. Zero out of four stars according to the Electronic Frontier Foundation.

Google (Hangouts) and Apple (FaceTime) both rate more highly, although you shouldn’t ever assume your communications aren’t being monitored. Even Off-The-Record chats can get you into trouble if they’re being logged.