Optimizing WebGL Shaders by Reading D3D Shader Assembly

We are optimizing WebGL shaders for the Intel GMA 950 chipset, which is basically the slowest WebGL-capable device we care about. Unfortunately, it’s a fairly common chipset too. On the plus side, if we run well on the GMA 950, we should basically run well anywhere. :)

When you’re writing GLSL in WebGL on Windows, your code is three layers of abstraction away from what actually runs on the GPU. First, ANGLE translates your GLSL into HLSL. Then, D3DX compiles the HLSL into optimized shader assembly bytecode. That shader bytecode is given to the driver, where it’s translated into hardware instructions for execution on the silicon.

Ideally, when writing GLSL, we’d like to see at least the resulting D3D shader assembly.

With a great deal of effort and luck, I have finally succeeded at extracting Direct3D shader instructions from WebGL. I installed NVIDIA Nsight and Visual Studio 2013 in Boot Camp on my Mac. To use Nsight, you need to create a dummy Visual Studio project. Without a project, it won’t launch at all. Once you have your dummy project open, configure Nsight (via Nsight User Properties under Project Settings) to launch Firefox.exe instead. Firefox is easier to debug than Chrome because it runs in a single process.

If you’re lucky — and I’m not sure why it’s so unreliable — the Nsight UI will show up inside Firefox. If it doesn’t show up, try launching it again. Move the window around, open various menus. Eventually you should have the ability to capture a frame. If you try to capture a frame before the Nsight UI shows up in Firefox, Firefox will hang.

Once you capture a frame, find an interesting draw call, make sure the geometry is from your WebGL application, and then begin looking at the shader. (Note: again, this tool is unreliable. Sometimes you get to look at the HLSL that ANGLE produced, which you can compile and disassemble with fxc.exe, and sometimes you get the raw shader assembly.)

Anyway, check this out. We’re going to focus on the array lookup in the following bone skinning GLSL:

attribute vec4 vBoneIndices;
uniform vec4 uBones[3 * 68];

ivec3 i = ivec3(vBoneIndices.xyz) * 3;
vec4 row0 = uBones[i.x];

ANGLE translates the array lookup into HLSL. Note the added bounds check for security. (Why? D3D claims it already bounds-checks array accesses.)

int3 _i = (ivec3(_vBoneIndices) * 3);
float4 _row0 = _uBones[int(clamp(float(_i.x), 0.0, 203.0))];

This generates the following shader instructions:

# NOTE: v0 is vBoneIndices
# The actual load isn't shown here.  This is just index calculation.

def c0, 2.00787401, -1, 3, 0
def c218, 203, 2, 0.5, 0

# r1 = v0, truncated towards zero
slt r1.xyz, v0, -v0
frc r2.xyz, v0
add r3.xyz, -r2, v0
slt r2.xyz, -r2, r2
mad r1.xyz, r1, r2, r3

mul r2.xyz, r1, c0.z # times three

# clamp
max r2.xyz, r2, c0.w
min r2.xyz, r2, c218.x

# get ready to load, using a0.x as index into uBones
mova a0.x, r2.y

That blob of instructions that implements truncation towards zero? Let’s decode that.

r1.xyz = (v0 < 0) ? 1 : 0
r2.xyz = v0 - floor(v0)
r3.xyz = v0 - r2
r2.xyz = (-r2 < r2) ? 1 : 0
r1.xyz = r1 * r2 + r3

Simplified further:

r1.xyz = (v0 < 0) ? 1 : 0
r2.xyz = (floor(v0) < v0) ? 1 : 0
r1.xyz = (r1 * r2) + floor(v0)

In short, r1 = floor(v0), UNLESS v0 < 0 and floor(v0) < v0, in which case r1 = floor(v0) + 1. In other words, r1 is v0 truncated toward zero: for v0 = -1.5, floor(v0) is -2, so r1 = -1.

That’s a lot of instructions just to calculate an array index. After the index is calculated, it’s multiplied by three, clamped to the array boundaries (securitah!), and loaded into the address register.

Can we convince ANGLE and HLSL that the array index will never be negative? (It should know this, since it’s already clamping later on, but whatever.) Perhaps avoid a bunch of generated code? Let’s tweak the GLSL a bit.

ivec3 i = ivec3(max(vec3(0), vBoneIndices.xyz)) * 3;
vec4 row0 = uBones[i.x];

Now the instruction stream is substantially reduced!

def c0, 2.00787401, -1, 0, 3
def c218, 203, 1, 2, 0.5

# clamp v0 positive
max r1, c0.z, v0.xyzx

# r1 = floor(r1)
frc r2, r1.wyzw
add r1, r1, -r2

mul r1, r1, c0.w # times three

# bound-check against array
min r2.xyz, r1.wyzw, c218.x
mova a0.x, r2.y

By clamping the bone indices against zero before converting to integer, the shader optimizer eliminates several instructions.

Can we get rid of the two floor instructions? We know that the mova instruction rounds to the nearest integer when converting a float to an index. Given that knowledge, I tried to eliminate the floor by making my GLSL match mova semantics, but the HLSL compiler didn’t seem smart enough to elide the two floor instructions. If you can figure this out, please let me know!

Either way, I wanted to demonstrate that, by reading the generated Direct3D shader code from your WebGL shaders, you may find small changes that result in big wins.

How IMVU Builds Web Services: Part 2

In this 3-part series, IMVU senior engineer Bill Welden describes the means and technology behind IMVU’s web services.

Part 2: Nodes and Edges

The Uniform Contract constraint of the REST discipline means that IMVU web applications are able to provide functionality to many different web applications in a robust and reusable manner.

There are, of course, many different ways of implementing a uniform contract. At IMVU, we have chosen a structured network model – a graph consisting of nodes, which represent the objects of interest to our users, and edges representing the relationships among those objects. The functionality of our web services is then framed in terms of inspecting and manipulating this graph.

For example, here is a network diagram showing some high school enrollment data.

[Diagram: two student nodes (Jan, Chris) linked by enrollment edges to an Algebra class and two English classes]

Two students, Jan and Chris, are each enrolled in two classes. They are both in the same Algebra class, but in different English classes.

Each of the students and each of the classes is represented by a node, shown here as a circle. The relationships between the students and the classes are represented by edges, shown here as lines connecting the circles.

Nodes come in different types. In this diagram we have student nodes and class nodes connected by edges representing course selections.

Nodes have properties, and the type of the node determines the properties. Each student, for example, has a name and an age. Each class has a starting time.

All of the nodes of a given type constitute a node group. This example has two node groups so far, one for students and one for classes. The node group determines the properties of the nodes in the group.

Node groups are important — they distinguish a structured network model from an unstructured one. Network models are usually assumed to be unstructured, where any sort of data can be associated with any node. In fact, data and links are often intermixed, as in a hyperlinked network of text documents.

IMVU REST services are built on a network model, but a structured one, where every node belongs to a node group, and it is the node group – not the node – that determines the specific properties found in each node. In this way, a structured network model is like a relational data model.

Let’s add some teachers.

[Diagram: the same network with teacher nodes (Janos, Mariska) linked to the classes they teach]

Janos teaches the 10 AM Algebra class and Mariska teaches both of the English classes. It is useful to group the edges, as they attach to the node, into edge groups:

[Diagram: the edges at each node grouped into edge groups]

All of the edges in an edge group attach, at the opposite end, to the same type of node. Each class node, for example, has two edge groups: one for students and one for teachers.

Because edges create relationships between nodes, the edge groups are sometimes named according to the role that the links play. Here we call the edge group listing the students in the class the “roster”, and the symmetrical view the student’s “schedule”. More often, though, edge groups are named simply by the type of node they link to, as with “teachers” here.

We will see later that edges are implemented using directional links – but the edges themselves are not directional, and can be viewed from either end. Jan is enrolled in Algebra and Algebra has Jan as a student.

Nodes have properties, but edges can also have properties. The number of times the student has been late to class is a property of the edge between the student and the class. It is not a property of the student, because the student will have many classes and may not be late to all of them, nor is it a property of the class, because many students will be enrolled in the class, and not all of them will be late.

The edge group determines what kind of node is at the other end of each edge, whether the edges carry properties, and if so what those properties are.

We model node groups, nodes, edge groups and edges using JSON documents served over HTTPS, and link them using URLs. In my next post, I will go into detail on the structure of these URLs and documents, and follow through on the high school enrollment example.
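
To make this a little more concrete, here is a purely illustrative sketch, written as a Python literal, of what a student node and its edge groups might look like. The URLs, field names, and values are invented for this example; the actual document and URL conventions are the subject of the next post.

student_node = {
    'id': '/student/1001',              # hypothetical URL naming Jan's node
    'properties': {                     # determined by the student node group
        'name': 'Jan',
        'age': 16,
    },
    'edge_groups': {
        'schedule': [                   # each edge links to a class node
            {'target': '/class/algebra-10am', 'times_late': 2},
            {'target': '/class/english-9am',  'times_late': 0},
        ],
    },
}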

Up next: Part 3 — Documents and Links

How IMVU Builds Web Services: Part 1

In this 3-part series, IMVU senior engineer Bill Welden describes the means and technology behind IMVU’s web services.

Part 1: REST

REST, or Representational State Transfer, is the model on which the protocols of the World Wide Web are built. It was originally described in the year 2000 in Roy Fielding’s doctoral dissertation, and was developed in order to impose some discipline on distributed hypermedia systems.

The benefits of REST have since proven valuable in defining APIs. Everybody has them these days: Twitter, Facebook, eBay, Paypal. And IMVU has been using REST as a standard for its back end services for many years.

Because the fit between REST (which is framed in terms of documents and links) and database-style applications (tables and keys) is not perfect, everybody means something slightly different when they talk about REST services.

Here is a rundown of the things that IMVU does under the aegis of REST, and the benefits that accrue:

  1. Virtual State Machine

In his dissertation, Fielding describes a REST API as a virtual state machine:

The name ‘Representational State Transfer’ is intended to evoke an image of how a well-designed Web application behaves: a network of Web pages (a virtual state-machine), where the user progresses through the application by selecting links (state transitions), resulting in the next page (representing the next state of the application) being transferred to the user and rendered for their use.

With each network request, a representation of the application state is transferred, first to the server, and then (as modified) back to the client. This is “representational state transfer” or REST.

Note that this is a two-phase thing. Once a request goes out from the client, the state of the application is “in transition”. When the response comes back from the server, the state of the client is “at REST”.

By defining a rigorous protocol which frames outgoing requests from clients in terms of following structured links, and the server’s response in terms of structured documents, REST makes it possible to build general, powerful support layers on both the client and the server. These layers then offload much of the work of implementing new services and clients.

  2. Separation of Concerns

Separation of concerns means that clients are responsible for the interface with the user, and servers are responsible for the storage and integrity of data.

This is important in building useful (and portable) services, but it is not always easy to achieve. The aspect ratio, and even the resolution, of graphic images ought to be under the control of the client. However, it has at times been too easy to design a back-end service around one specific application. For example, static images such as product thumbnails might be provided in only one aspect ratio, limiting the service’s usefulness for other applications. This narrow view of the service limits the ability to roll out new versions and to share the service across products.

When we designed our Server Side Rendering service (which delivers a two-dimensional snapshot of a specific three-dimensional product model), we went to some pains to place this control – height and width of desired image – in the hands of the client through a custom HTTP header included with the service request.

  3. Uniform Contract

A uniform contract means that our services comply with a set of standards that we publish, including a consistent URI syntax and a limited set of verbs. I will talk in more detail about these standards in a future post, but for now note that they allow much of what would otherwise be service specific code – protocols for navigation, for manipulation of data, for security and so forth – to be implemented in generic layers. Just as important, they allow much of the documentation for our services to be consolidated in one place, so that designing becomes a more streamlined and focused process.

With a couple of very specific exceptions, the documents we send and receive are JSON, and structured in a very specific way. In particular, links are gathered together in one place within the document so that the generic software layers are better able to support the server code in generating them, and the application code in following them. This stands in contrast to web browsers which must be prepared to deal with a huge variety of text, graphic and other embedded object formats, and to find links scattered randomly throughout them.

For the API designer, these standards limit the ability to structure data in responses as freely as we might want. Our standards are still in a bit of flux as we try to navigate this tension between freedom of API design and the needs of the generic software layers we have built.

The fact, however, that our standards are based on JSON documents and are requested and provided using HTTP protocols means that the resulting services can be implemented using varied technologies (we have back end services in PHP, Haskell, Python and C++) and accessed from a broad variety of platforms.

The uniform contract also allows ancillary agents to extract generic information from messages, without knowing the specifics of the service implementation. For example, with a single piece of code, IMVU has configured its Istatd statistics collection system to collect real-time data on the amount of time each of our REST endpoints takes to do its work, and this data collection will now occur for every future endpoint without any initiative on the part of service implementers. The ready availability of these statistics allows for greater reliability through improved response time to outages.
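
As a sketch of how that kind of generic instrumentation can work, here is a hypothetical Python decorator that times every endpoint handler and reports the measurement under a per-endpoint metric name. The record_timing function is a stand-in for whatever the statistics client actually exposes; this is not IMVU’s Istatd code.

import functools
import time

def record_timing(metric_name, seconds):
    # Stand-in for a real statistics client; a real implementation
    # might send a datagram to a stats collector.
    pass

def timed_endpoint(handler):
    """Report the wall-clock duration of every call to a request handler
    under a metric named after the handler."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return handler(*args, **kwargs)
        finally:
            record_timing('rest.%s.duration' % handler.__name__,
                          time.time() - start)
    return wrapper

@timed_endpoint
def get_student(student_id):
    # ... the actual endpoint logic would go here ...
    return {'id': student_id}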

In addition, the design of this uniform contract means that each service addresses a well-defined slice of functionality, allowing new services to be added in parallel with a minimum of disruption to existing code.

  4. HATEOAS

HATEOAS (Hypertext As The Engine Of Application State) is the discipline of treating all resource links as opaque to the clients that use them. Except for one well-known root URL, URLs are not hard coded, and they’re not constructed or parsed by clients. Links are retrieved from the server, and all navigation is done by following links. This allows us to move resources dynamically around our cluster (or even outside if necessary) and to add, refactor and extend services even while they are in active use, changing back-end software only, so that new releases of client software are required less often.
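
To make the discipline concrete, here is a minimal client sketch in Python. It assumes a hypothetical service whose documents collect their links under a single 'links' key (as noted above, links live in one place in the document); only the root URL is known in advance, and every other URL is discovered by following named links.

import requests

ROOT_URL = 'https://api.example.com/'  # the single well-known URL (hypothetical)

def follow(session, url):
    """Fetch a document and return its parsed JSON body."""
    response = session.get(url)
    response.raise_for_status()
    return response.json()

session = requests.Session()
root = follow(session, ROOT_URL)

# Navigate by link name rather than by constructing or parsing URLs client-side.
# The 'links' key and the link names are invented for this sketch.
schedule_url = root['links']['schedule']
schedule = follow(session, schedule_url)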

  5. Stateless

Under the REST discipline, applications are stateless – or rather the entire state needed to process a service request is sent with the request (much of it in the form of an identity token sent as a cookie). In this way, between one request and the next, the server needs to know nothing about the client’s application state, and servers do not need to retain state for every active client, which allows us to distribute requests across our cluster according to load.

Because servers no longer depend on the number and states of all the clients they might be asked to service, the server code can be simpler and scales well. In addition, we can cache and stage requests at different points in our cluster, keeping them close to where they will be serviced. Responses can also be cached (keyed again by the URL of the request).

Finally, though it is not a stipulation of REST, we use IMVU’s real-time IMQ message queue to push notifications to clients (keyed by the service URL) when their locally cached data becomes invalid. This gives clients the information needed to update stale data, but it also allows real-time updating of data that is displayed to the user. When one user changes the outfit of their avatar, for example, all of the users in that chat room will see the updated look.

  6. Internet Scale

The internet comes with challenges, and REST provides us with a framework for addressing those challenges in a systematic way, but it is no panacea.

Fielding uses the term “anarchic scalability” to describe these challenges – by which he means the need to continue operating in the face of unanticipated load, or when given malformed or maliciously constructed data.

At IMVU we take the issue of load into account from the outset when designing our services, but the internet, as an interacting set of heterogeneous servers and services, often displays complex emergent behavior. Even within our own cluster we have over fifty different kinds of servers (what we call roles), each kind talking to a specific set of other roles, depending on its needs. Under load, failures can cascade, and feedback loops can keep the dysfunctional behavior in place even after the broader internet has stopped stressing the system.

Every failure triggers a post-mortem inquiry, bringing together all of the relevant parties (IT, engineering, product management) to establish the history and the impact of the failure. The statistics collected and recorded by Istatd are invaluable in this process of diagnosis.

Remedies are designed to address not only the immediate dysfunctional behavior, but a range of possible similar problems in the future, and can include adding new hardware or modifying software to throttle or redirect traffic. Out of these post-mortems, our design review process continues to mature, adding checklist items and questions which require designers to think about their prospective services from many different perspectives, and particularly through the lens of past failures.

There are times when a failure cannot be fully understood, even with the diagnostic history available to us. Istatd makes it easy to add new metrics which may help in understanding future failures of a similar type. We monitor more than a hundred thousand metrics, and the number is growing.

The chaos injected by the internet includes the contents of packets. Our code, on both the client and server side, is written so that it makes no assumptions about the structure of the data it receives. Even when a document is constructed by our service and consumed by a client we have written, the data may arrive damaged, or even deliberately modified. There is backbone hardware, for example, which attempts to insert advertisements into web pages.

Up next: Part 2 — Nodes and Edges

In my next post I will describe the process we go through to develop a service concept, leading up to implementation.

 

Web Platform Limitations, Part 1 – XMLHttpRequest Priority

The web is inching towards being a general-purpose applications platform that rivals native apps for performance and functionality. However, to this day, API gaps make certain use cases hard or impossible.

Today I want to talk about streaming network resources for a 3D world.

Streaming Data

In IMVU, when joining a 3D room, hundreds of resources need to be downloaded: object descriptions, 3D mesh geometry, textures, animations, and skeletons. The size of this data is nontrivial: a room might contain 40 MB of data, even after gzip. The largest assets are textures.

To improve load times, we first request low-resolution thumbnail versions of each texture. Then, once the scene is fully loaded and interactive, high-resolution textures are downloaded in the background. High-resolution textures are larger and not critical for interactivity. That is, each resource is requested with a priority:

High priority:  skeletons, meshes, low-resolution textures, animations
Low priority:   high-resolution textures

Browser Connection Limits

Let’s imagine what would happen if we opened an XMLHttpRequest for each resource right away. What happens depends on whether the browser is using plain HTTP or SPDY.

HTTP

Browsers limit the number of simultaneous TCP connections to each domain. That is, if the browser’s limit is 8 connections per domain and we open 50 XMLHttpRequests, the 9th doesn’t even submit its request until one of the first 8 finishes. (In theory, HTTP pipelining helps, but browsers don’t enable it by default.) Since there is no way to indicate priority in the XMLHttpRequest API, you would have to be careful to issue XMLHttpRequests in order of decreasing priority, and hope that no higher-priority requests arrive later. (Otherwise, they would be blocked behind low-priority requests.) This assumes the browser issues requests in sequence, of course. If not, all bets are off.

There is a way to approximate a prioritizing network atop HTTP XMLHttpRequest. At the application layer, limit the number of open XMLHttpRequests to 8 or so and have them pull the next request from a priority queue as requests finish.
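
Here is a sketch of that scheduling idea. It is written in Python to keep it short; in the browser this logic would be JavaScript wrapped around XMLHttpRequest, with on_complete called from each request’s completion callback. Lower priority numbers are fetched first, and issue_request is a stand-in for actually starting a request.

import heapq

MAX_IN_FLIGHT = 8  # roughly the browser's per-domain connection limit

class PriorityFetcher(object):
    """Keep at most MAX_IN_FLIGHT requests open; whenever one finishes,
    start the highest-priority pending request (lowest number first)."""

    def __init__(self, issue_request):
        self.issue_request = issue_request  # callable taking a URL; starts one request
        self.pending = []                   # heap of (priority, sequence, url)
        self.in_flight = 0
        self.sequence = 0                   # tie-breaker preserves FIFO order per priority

    def enqueue(self, priority, url):
        heapq.heappush(self.pending, (priority, self.sequence, url))
        self.sequence += 1
        self._pump()

    def on_complete(self):
        # Call this from each request's completion callback.
        self.in_flight -= 1
        self._pump()

    def _pump(self):
        while self.in_flight < MAX_IN_FLIGHT and self.pending:
            _, _, url = heapq.heappop(self.pending)
            self.in_flight += 1
            self.issue_request(url)

Skeletons and meshes would be enqueued with priority 0, low-resolution textures with priority 1, and high-resolution textures with priority 2, for example.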

Soon I’ll show why this doesn’t work that well in practice.

SPDY

If the browser and server both support SPDY, then there is no theoretical limit on the number of simultaneous requests to a server. The browser could happily blast out hundreds of HTTP requests, and the responses will make full use of your bandwidth. However, a low-priority response might burn bandwidth that could otherwise be spent on a high-priority response.

SPDY has a mechanism for prioritizing requests. However, that mechanism is not exposed to JavaScript, so, like HTTP, you either issue requests from high priority to low priority or you build the prioritizing network approximation described above.

However, the prioritizing network reduces effective bandwidth utilization by limiting the number of concurrent requests at any given time.

Prioritizing Network Approximation

Let’s consider the prioritizing network implementation described above. Besides the fact that it doesn’t make good use of the browser’s available bandwidth, it has another serious problem: imagine we’re loading a 3D scene with some meshes, 100 low-resolution textures, and 100 high-resolution textures. Imagine the high-resolution textures are already in the browser’s disk cache, but the low-resolution textures aren’t.

Even though the high-resolution textures could be displayed immediately (and would trump the low-resolution textures), because they have low priority, the prioritizing network won’t even check the disk cache until all low-resolution textures have been downloaded.

That is, even though the customer’s computer has all of the high-resolution textures on disk, they won’t show up for several seconds! This is an unnecessarily poor experience.

Browser Knows Best

In short, the browser has all of the information needed to effectively prioritize HTTP requests. It knows whether it’s using HTTP or SPDY. It knows what’s in cache and not.

It would be super fantastic if browsers let you tell them how important each request is. I’ve seen some discussions about adding priority hints, but they seem to have languished.

tl;dr Not being able to tell the browser about request priority prevents us from making effective use of available bandwidth.

FAQ

Why not download all resources in one large zip file or package?

Each resource lives at its own URL, which maximizes utilization of HTTP caches and data sharing. If we downloaded resources in a zip file, we wouldn’t be able to leverage CDNs and the rest of the HTTP ecosystem. In addition, HTTP allows trivially sharing resources across objects. Plus, with protocols like SPDY, per-request overhead is greatly reduced.

What it’s like to use Haskell

By Andy Friesen

Since early 2013, we at IMVU have used Haskell to build several of the REST APIs that power our service.

When the company started, we chose PHP as our application server language, in part, because the founders expected the website to only be a small part of the business!  IMVU was primarily about a downloadable 3D client.  We needed “a website or something” to give users a place to download our client from, but didn’t expect it would have to be much more than that. This shows that predicting the future is hard.

Years later, we have quite a lot of customers, and we primarily use PHP to serve them.  We’re big enough that we run multiple subteams on separate initiatives at the same time.  Performance is becoming important to us not just because it matters to our customers, but because it can easily make the difference between buying 4 servers and buying 40 servers to support some new feature.

So, early in 2012, we found ourselves ready to look for an alternative that would help us be more rigorous.  In particular, we were ready for the idea that sacrificing a tiny bit of short term, straight-line time to market might actually speed us up in the long run.

How We Got Here

I started learning Haskell in my spare time in part because Haskell seems like the exact opposite of PHP: Natively compiled, statically typed, and very principled.

My initial exploration left me interested in evaluating Haskell at real scale.  A year later, we did a live-fire test in which we taught multiple teammates Haskell while delivering an important new feature under a deadline.

Today, a lot of our backend code is still driven by PHP, but we have a growing amount of Haskell that powers newer features. The process has been exciting, not only because we got to answer a lot of the questions that keep many people from trying Haskell, but also because it’s simply a better solution.

The experiment to start developing in Haskell took a lot of internal courage and dedication, and we had to overcome a number of, quite rational, concerns related to adopting a whole new language. Here are the main ones and how they worked out for us:

Scalability

The first thing we did was to replace a single service with a Haskell implementation.  We picked a service that was high-volume but was not mission critical.

We didn’t do any particular optimization of this new service, but it nevertheless showed excellent performance characteristics in the field.  Our little Haskell server was running on a pair of spare servers that were otherwise set for retirement, and despite this, each machine was handling about 20x as many requests as one of our high-spec PHP servers could manage.

Reliability

The second thing we did was to take our hands off the Haskell service and leave it running until it fell over.  It ran for months without intervention.

Training

After the reliability test, we were ready to try a live fire exercise, but we had to wait a bit for the right project.  We got our chance in early 2013.

The rules of the experiment were simple: Train 3 engineers to write the backend for an important new project and keep up with a separate frontend team.  Most of the code was to be new, so there was relatively little room for legacy complications.

We very quickly learned that we had also signed up for a lot of catch-up work to bring the Haskell infrastructure in line with what we’ve had for years in PHP.  We were very busy for a while, but once we got this infrastructure out of the way, the tables turned and the front-end team became the limiting factor.

Today, training an engineer to be productive in our Haskell code is not much harder than training someone to be productive in our PHP environment.  People who have prior functional programming knowledge seem to find their stride in just a few days.

Testing

Correctness is becoming very important for us because we sometimes have to change code that predates every current developer.  We have enough users that mistakes become very costly, very quickly.  Solving these sorts of issues in PHP is sometimes achievable but always difficult.  We usually solve them with unit tests and production alerts, but these approaches aren’t sufficient for all cases.

Unit tests are incredible and great, but you’re always at the mercy of the level of discipline of every engineer at every moment. It’s easy to tell your teammates to write tests for everything, but this basically boils down to asking everyone to be at their very best every day.  People make mistakes and things slip through the cracks.

When using Haskell, we actually remove an entire class of defects that we have to write tests for. Thus, the number of tests we have to write is smaller, and thus there are fewer cases we can forget to write tests for.

We like unit testing and test-driven development (TDD) at IMVU and we’ve found that Haskell is better with TDD, but also that TDD is better with Haskell.  It takes fewer tests to get the same degree of reliability out of Haskell.  The static verification takes care of quite a lot of error checking that has to be manually implemented (or forgotten) in PHP.  The Haskell QuickCheck tool is also a wonderful help for developers.

The way Haskell separates pure computations from side effects lets us build something that isn’t practical with other languages: a custom monad that lets us “switch off” side effects in our tests.  This is incredible because it means that trying to escape the testing sandbox breaks compilation. While we have had to fight intermittent test failures for eight years in PHP (and at times have had multiple engineers simultaneously dedicated to the problem of test intermittency), our unit tests in Haskell cannot intermittently fail.

Deployment

Deployment is great. At IMVU, we do continuous deployment, and Haskell is no exception. We build our application as a statically linked executable, and rsync it out to our servers. We can also keep old versions around, so we can switch back, should a deployment result in unexpected errors.

I wouldn’t write an OS kernel in it, but Haskell is way better than PHP as a systems language. We needed a Memcached client for our Haskell code, and rather than try to talk to a C implementation, we just wrote one in Haskell.  It took about a half day to write and performs really well. And, as a side effect, if we ever read back some data we don’t expect from memcached (say, because of an unexpected version change) then Haskell will automatically detect and reject this data.

We’ve consistently found that we unmake whole classes of bugs by defining new data types for concepts to wrap primitive types like integers and strings.  For instance, we have two lines of code that say that “customer IDs” and “product IDs” are represented to the hardware as numbers, but they are not mutually convertible.  Setting up these new types doesn’t take very much work and it makes the type checker a LOT more helpful. PHP, and other popular dynamic server languages like Javascript or Ruby, make doing the same very hard.

Refactoring is a breeze.  We just write the change we want and follow the compile errors.  If it builds, it almost certainly also passes tests.

Not All Sunshine and Rainbows

Resource leaks in Haskell are nasty.  We once had a bug where an unevaluated dictionary was the source of a space leak that would eventually take our servers down.  We also ran into an issue where an upstream library opened /dev/urandom for randomness, but never closed the file handle.  These issues don’t happen in PHP, with its process-per-request model, and they were more difficult to track down and resolve than they would have been in C++.

The Haskell package manager, Cabal, ended up getting in the way of our development. It lets you specify version ranges of particular packages you want, but it’s important for everyone on the team to have exactly the same versions of every package.  That means controlling transitive dependencies, and Cabal doesn’t really offer a way to handle this precisely. For a language that is so principled about its type system, it’s surprising that the package manager doesn’t follow suit regarding package versioning. Instead, we use Cabal for basic package installation, and a custom build tool (written in Haskell).

Hiring

I’ll admit that I was very worried that we wouldn’t be able to hire great people if our criterion was expertise in an uncommon language with a comparatively sparse industrial track record, but the honest truth is that we found a great Haskell hacker in the Bay Area after about 4 days of looking.

We had a chance to hire him because we were using Haskell, not in spite of it.

Final Thoughts

While it’s usually difficult to objectively measure things like choice of programming language or software stack, we’re now seeing fantastic, obvious productivity and efficiency gains.  Even a year later, all the Haskell code we have runs on just a tiny number of servers and, when we have to make changes to the code, we can do so quickly and confidently.

Charming Python: How to actually close a socket by calling shutdown() before calling close()

By Eric Hohenstein

You may never have to use the Python low-level socket interface but if you ever do, here’s a tip regarding some surprising Python socket behavior that might help you.

The standard Python library has a “socket” module written in Python that wraps an underlying “_socket” module written in C that wraps OS-level sockets. Together, these expose a cross-platform Berkeley socket-like interface to Python applications. I say Berkeley socket-like because the socket functions are exposed on an object rather than as global free functions. The surprise with this interface is that the close() function of the socket object does not close the socket. Instead, it de-references it, allowing garbage collection to eventually close the OS-level socket. If you only use the close() method on a socket, the socket will continue using both client and server resources until it is garbage collected. Worse, if the other end keeps sending data to the locally discarded socket, it may block when the receive window of the local socket fills up. Even worse than that, if the software handling the other end of the socket is written in Erlang and zero-window packets are dropped somewhere in between (for instance, because of overly aggressive firewall rules), the other end may block until the socket gets garbage collected, even if the socket is configured to immediately time out send operations that would block.

The solution is to first call shutdown() on the socket before calling close(). You have to be careful doing this if the socket is being read/written by multiple threads simultaneously but down that path lies madness, so don’t do that.
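
In code, the pattern looks like this (a minimal sketch; the host and port are placeholders):

import socket

sock = socket.socket()
sock.connect(('www.example.com', 80))
try:
    # ... use the socket ...
    pass
finally:
    # shutdown() tears down the OS-level connection right away;
    # close() then just drops the Python wrapper's reference to it.
    sock.shutdown(socket.SHUT_RDWR)
    sock.close()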

The Python documentation for the socket module actually does briefly mention this but doesn’t really give much of an explanation as to why it’s necessary. Really, the Python documentation on the subject is correct in that calling shutdown is better form. However, in cases where you are sure that the application state of the connection is no longer valid, it should be possible to bypass the shutdown and close the socket to simply free its resources immediately.

Why is it necessary to call shutdown() before close()? Let’s take a look at the socket.py module from the standard Python library. These code snippets are all taken from Python 2.6 source, though I’ve confirmed that Python 2.7 and Python 3.0 behave similarly. First, socket.py imports everything from _socket, which includes a class called socket:

import _socket

from _socket import * 

Then, it saves a reference to _socket.socket called _realsocket:

_realsocket = socket

Then it defines a list of functions that will be exposed from the real socket object to the wrapper class (see below):

_socketmethods = (
    'bind', 'connect', 'connect_ex', 'fileno', 'listen',
    'getpeername', 'getsockname', 'getsockopt', 'setsockopt',
    'sendall', 'setblocking',
    'settimeout', 'gettimeout', 'shutdown')

Note that ‘close’ is not in that list but ‘shutdown’ is.

Then it defines a _closedsocket class that will fail all operations:

class _closedsocket(object):
    __slots__ = []
    def _dummy(*args):
        raise error(EBADF, 'Bad file descriptor')
    # All _delegate_methods must also be initialized here.
    send = recv = recv_into = sendto = recvfrom = recvfrom_into = _dummy
    __getattr__ = _dummy

Then, it defines a class called _socketobject. Note the comment before the start of this class:

# Wrapper around platform socket objects. This implements
# a platform-independent dup() functionality. The
# implementation currently relies on reference counting
# to close the underlying socket object.
class _socketobject(object):

    __doc__ = _realsocket.__doc__
    __slots__ = ["_sock", "__weakref__"] + list(_delegate_methods)

    def __init__(self, family=AF_INET, type=SOCK_STREAM, proto=0, _sock=None):
        if _sock is None:
            _sock = _realsocket(family, type, proto)
        self._sock = _sock
        for method in _delegate_methods:
            setattr(self, method, getattr(_sock, method))

Within the _socketobject class, it defines a method for each of the functions exposed by the _socket class:

    _s = ("def %s(self, *args): return self._sock.%s(*args)\n\n"
          "%s.__doc__ = _realsocket.%s.__doc__\n")
    for _m in _socketmethods:
        exec _s % (_m, _m, _m, _m)
    del _m, _s

Remember that close was not in the _socketmethods list. Here’s the close method of the _socketobject class:

    def close(self):
        self._sock = _closedsocket()
        dummy = self._sock._dummy
        for method in _delegate_methods:
            setattr(self, method, dummy)
    close.__doc__ = _realsocket.close.__doc__

Assigning a new value to self._sock de-references the real socket object, allowing it to be garbage collected. It then exposes the _socketobject class as socket:

socket = SocketType = _socketobject

Client networking code using the Python socket library would create a socket like so:

import socket

sock = socket.socket() 

which will create a socket._socketobject instance that has a _sock member which is a _socket.socket instance. After connecting the socket to a remote endpoint, calling close() just assigns the _sock member to an instance of _closedsocket, allowing Python to eventually garbage collect the original _socket.socket object. After the previous code, the following will “leak” the OS socket for some arbitrary amount of time:

sock.connect(("www.foobar.com", 80))

sock.close() 

Looking at the C code in _socketmodule.c, there is a close() method defined on the _socket.socket class but there is no code in socket.py that will call it:

static PyObject *
sock_close(PySocketSockObject *s)
{
    SOCKET_T fd;

    if ((fd = s->sock_fd) != -1) {
        s->sock_fd = -1;
        Py_BEGIN_ALLOW_THREADS
        (void) SOCKETCLOSE(fd);
        Py_END_ALLOW_THREADS
    }
    Py_INCREF(Py_None);
    return Py_None;
}

Actually, there is code in socket.py that will call this function but it’s in the socket._fileobject class that is returned from the socket._socketobject makefile() function which typical networking code will likely not use.

If the close() method of socket._socketobject deferred to the _socket.socket close() method, calling close() on a socket.socket object would actually close the socket (if it was open). Since it does not, calling shutdown() is the only way to close the socket immediately. I’ll not go into the difference between close() and shutdown() here since it’s fairly complex and mostly unimportant but a simplified explanation is that shutdown() may wait until any unsent data has been flushed to the network (although it may not) and close() will not.

Here’s the implementation of _socket.socket shutdown() function:

static PyObject *
sock_shutdown(PySocketSockObject *s, PyObject *arg)
{
    int how;
    int res;

    how = PyInt_AsLong(arg);
    if (how == -1 && PyErr_Occurred())
        return NULL;
    Py_BEGIN_ALLOW_THREADS
    res = shutdown(s->sock_fd, how);
    Py_END_ALLOW_THREADS
    if (res < 0)
        return s->errorhandler();
    Py_INCREF(Py_None);
    return Py_None;
}

Calling shutdown() on an instance of socket._socketobject will actually shutdown the OS level socket before it is garbage collected. If the _socket.socket object is not already shutdown or closed, it will eventually be closed when it’s garbage collected:

 

static void
sock_dealloc(PySocketSockObject *s)
{
    if (s->sock_fd != -1)
        (void) SOCKETCLOSE(s->sock_fd);
    Py_TYPE(s)->tp_free((PyObject *)s);
}

The conclusion is that Python sockets should always be closed by first calling shutdown() and then calling close().
