Open Source and the Workplace

Posted in Uncategorized on March 6, 2015 by Noah

Today, just a small post.

You should work someplace where you can write and publish open source software.

Not everything has to be open source, but something should be.

You should do it to give back.

You should do it to build your own reputation.

You should do it to be sure your workplace can do it — some say they want to, but in practice they can’t.

And perhaps most of all, because when a company asks you, “hey, what’s your GitHub profile?”, the correct answer includes, “… and also, where’s yours?”


Git Merge Without Fast Forward

Posted in Uncategorized on February 27, 2015 by Noah

If you’re doing work in a branch and you want the branch marked as such, it’s a great idea to merge with no fast forward.

Normally, if you merge a branch into your git master branch, you can lose any record of the branch — if master hasn’t moved since you branched, git will fast-forward and the branch’s commits are simply folded in, with no record of the branch ever having existed.

You can fix this problem by using “git merge --no-ff”, meaning “no fast forward.” That way you’re guaranteed to get a merge commit, which gives you a place to write a message, which lets you say something like “merged from noahs-zazzy-new-feature-branch”.
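For example, a quick sketch using the branch name from that message:

git checkout master
git merge --no-ff noahs-zazzy-new-feature-branch
# git opens your editor for the merge commit message; write something like
# "merged from noahs-zazzy-new-feature-branch"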

It’s a common thing to want to do.

x264 Rate Control (part one)

Posted in Uncategorized on February 20, 2015 by Noah

Overview of a video encoder’s rate control

(For more on video encoding, see Is H265 Such An Enabling Technology for Cloud Gaming?)

A rate controller in a video encoder adjusts the quantization steps of the macroblocks in each video frame so that the encoded bitstream maintains either constant video quality or, more often, a constant bit rate (CBR) with constraints on frame-size fluctuation.

Quantization is the process of mapping a large set of input values to a smaller, countable set, usually by rounding values to some unit of precision.  The diagram below shows how analog signal samples are quantized to digital values.  Larger quantization step sizes cause a bigger loss of signal fidelity, but need fewer bits to represent the quantized results.  The trade-off between video quality and bit-rate cost can therefore be managed by selecting appropriate quantization steps dynamically.

[Figure: quantization of analog signal samples to digital values]
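As a toy illustration (this is not x264 code, just the idea), uniform quantization and reconstruction with a step size qstep might look like:

/* Toy example, not x264 code: quantize a value with step size qstep,
   then reconstruct it the way a decoder would. */
int quantize(int value, int qstep)   { return (value + qstep / 2) / qstep; }
int dequantize(int index, int qstep) { return index * qstep; }

/* With value = 23:
   qstep = 4  -> index 6 -> reconstructed 24 (error 1, but the index needs more bits)
   qstep = 16 -> index 1 -> reconstructed 16 (error 7, and the index needs fewer bits) */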


Compressed video frames, even in a CBR bit stream, have different sizes (e.g., I-frames are bigger than P-frames, which are bigger than B-frames).  The video bit rate is measured as the average rate of the bit stream over a period of time.  To constrain the maximum range of frame-size fluctuation, there is an important concept called the VBV (video buffering verifier).  In the VBV model, the compressed video bit stream fills the buffer at a constant speed, i.e., at a constant bit rate (see the picture below).  On the other hand, complete compressed video frames are expected to be available in the buffer at regular time intervals, i.e., the video can be played out at a constant frame rate.  The size of the VBV buffer determines the range of frame-size fluctuation.  It is also the amount of memory a decoder must have for buffering its incoming compressed bit stream in order to achieve smooth playout.  The VBV buffer size directly affects the overall encoding/decoding latency: the bigger the buffer, the higher the latency.

[Figure: the VBV buffer model]
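A minimal sketch of that VBV bookkeeping (the struct and names are made up for illustration, not taken from x264):

/* Sketch of the VBV model: bits arrive at the constant bit rate, and one
   complete compressed frame is drained per frame interval. */
typedef struct {
    double fill;      /* bits currently in the buffer */
    double size;      /* VBV buffer size in bits      */
    double bitrate;   /* constant bit rate, bits/s    */
    double fps;       /* frame rate                   */
} vbv_t;

void vbv_frame(vbv_t *vbv, double frame_bits)
{
    vbv->fill += vbv->bitrate / vbv->fps;   /* constant-rate fill over one frame     */
    vbv->fill -= frame_bits;                /* one whole frame is pulled for playout */

    if (vbv->fill < 0)
        vbv->fill = 0;                      /* underflow: frame too big, playout stalls  */
    if (vbv->fill > vbv->size)
        vbv->fill = vbv->size;              /* overflow: frames too small, channel idles */
}

The rate controller’s job is to pick quantization steps so that neither of those clamps is ever hit.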

x264’s rate control algorithm and implementation

x264’s rate control logic is contained in ratecontrol.h/c.  The control is performed at two levels (see the flowchart below):

  • frame level quant (quantization step) adaptation;
  • MB level quant adaptation.

In this article, we only talk about frame-level quant adaptation for real-time, low-latency encoding (no second pass, no lookahead).  The MB-level control (per-row VBV tracking, MB-tree, adaptive quantization) will be discussed in a separate blog article.  In the following sections, we will mainly explain two functions: x264_rate_control_start() and x264_rate_control_end().

03_block_diagram

x264_rate_control_start()

x264_rate_control_start() is called before the first macroblock of a frame is encoded.  It performs the computation needed to derive the quant for the frame about to be encoded. It has two major subroutines:

  • update_vbv_plan() calculates the target VBV buffer level after the frame is encoded;
  • rate_estimate_qscale() calculates the quant of the frame based on its SATD and the historical frame-size/complexity stats.  If VBV constraints are specified, the VBV level is predicted from the previous SATD/frame-size/quant stats and the quant is further adjusted according to the predicted VBV level.

The get_qscale() inside rate_estimate_qscale() computes the quant using the following equation:

[Equation 1]

where

[Equation 2]

Here the (-1) means the previous frame, (-2) means the frame before the previous frame, and so on.

The blurred_complexity is the SATD (sum of absolute transformed differences) of this and past frames.  SATD represents the amount of residual information, and can be regarded as roughly proportional to the encoded frame size.  wanted_bits_window and cplxr_sum are historical frame-size and complexity stats, collected in x264_rate_control_end().
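To make equations 1 and 2 concrete, here is a rough sketch of how I read the computation (it is not a copy of ratecontrol.c, and the 0.5 decay weight on the SATD average is my assumption):

#include <math.h>

/* Equation 2 (as I read it): blurred_complexity is the SATD of this and past
   frames, with older frames weighted down geometrically; (-1) is the previous
   frame, (-2) the one before, and so on. */
static double cplx_sum = 0.0, cplx_count = 0.0;

double blurred_complexity_update(double satd)
{
    cplx_sum   = 0.5 * cplx_sum + satd;
    cplx_count = 0.5 * cplx_count + 1.0;
    return cplx_sum / cplx_count;
}

/* Equation 1: complexity^(1 - qcompress) divided by the rate factor
   wanted_bits_window / cplxr_sum. */
double frame_qscale(double blurred_complexity, double qcompress,
                    double wanted_bits_window, double cplxr_sum)
{
    return pow(blurred_complexity, 1.0 - qcompress)
           * cplxr_sum / wanted_bits_window;
}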

x264_rate_control_end()

x264_rate_control_end() is called after the last macroblock of a frame has been encoded.  It collects stats for the next frame’s x264_rate_control_start().  It, too, performs two tasks:

  • calculate wanted_bits_window and cplxr_sum;
  • update_vbv() updates the VBV level with the actual encoded frame size.

[Equation 3]

[Equation 4]

where 0.x is the decay constant.

So, wanted_bits_window is the low-pass-filtered nominal frame size, while cplxr_sum is the low-pass-filtered ratio of actual_frame_size times qscale to blurred_complexity raised to the power (1 - qcompress).
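Continuing the same sketch, the end-of-frame bookkeeping for equations 3 and 4 would then look roughly like this (again my reading of the description above, with decay standing in for the 0.x constant):

/* Equations 3 and 4: low-pass filters over the nominal frame size and over
   actual_frame_size * qscale / blurred_complexity^(1 - qcompress). */
void rate_control_end_update(double *wanted_bits_window, double *cplxr_sum,
                             double frame_bits, double qscale,
                             double blurred_complexity, double qcompress,
                             double bitrate, double fps, double decay)
{
    *wanted_bits_window = decay * *wanted_bits_window + bitrate / fps;
    *cplxr_sum          = decay * *cplxr_sum
                        + frame_bits * qscale
                          / pow(blurred_complexity, 1.0 - qcompress);
}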

Finally, if we plug equations 2, 3, and 4 into equation 1, the frame-level quant can be written as:

[Equation 5]

The quantization step is derived from the previous frames’ actual sizes and quant steps: in a stable scene, with the desired frame size being (bit rate / frame rate), the second part of the equation gives us the right quantization step for the next frame.  However, since every frame’s SATD differs from the average, we scale the quant according to SATD: the bigger the SATD (the harder the picture is to encode), the higher the quant (so we don’t waste too many bits on the frame).

In the next article, we will explain how x264’s rate control meets the VBV constraints, i.e., how clip_qscale() at the frame level and x264_ratecontrol_mb_qp() at the macroblock level work.

CoffeeScript Subtleties: Hashes / Objects

Posted in Front End on February 6, 2015 by Noah

CoffeeScript has a fun syntax for declaring hash tables (aka JavaScript objects). It looks like this:

myhash =
  field1: "foo"
  field2: "blah"
  field3: 7

In fact, it has several different syntaxes for that:

myhash = { field1: "foo", field2: "blah" }
myhash = field1: "foo", field2: "blah"

You can’t declare the first field on one line and then continue multiline, though:

# Doesn't work
myhash = field1: "foo"
  field2: 7

The big thing to watch out for is comprehensions:

# A list of hashes, each with three fields
{ field1: "foo", field2: i, field3: 7 } for i in [1, 2, 3]

BUT! This is different:

# A hash with three fields, the last of which is a list
field1: "foo", field2: i, field3: 7 for i in [1, 2, 3]

So watch your order of operations with list comprehensions, and don’t be afraid to use curly braces (as above) or parens:

# A list of hashes, each with three fields
( field1: "foo", field2: i, field3: 7 ) for i in [1, 2, 3]

Using tsql with FreeTDS to query MS SQL Server?

Posted in Uncategorized on January 23, 2015 by Noah

The tsql man page and user guide are wonderful about telling you all the ways it can fail. Yay!

They simply do not bother to tell you how to get it to do a SQL query. Boo!

Here’s the (very simple) secret:

tsql query

Type your query. Hit enter. Type “GO” (or “go” or “gO” or whatever) and hit enter.

Done.
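For instance, a session might look something like this (the server name, login, and table are all made up):

$ tsql -S my_sqlserver -U my_user -P my_password
1> SELECT TOP 5 name, created_at FROM users;
2> GO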

Now you can see if your problem is, like mine, somewhere in tiny_tds or freetds or activerecord-sqlserver-adapter, or… And you don’t even have to read through tsql.c like I did.

And if you’re getting an argument error after the new year in 2015 in tiny_tds on Linux (only), it’s something wacky about how your datetime timestamps are getting parsed. It seems to be a problem in tiny_tds or ActiveRecord’s sqlserver adapter, not in FreeTDS — tsql reads the entries just fine, and that’s raw calls to FreeTDS.

Also, a query only reveals the problem if you actually return one or more timestamps, not if you get no rows returned. Now you know!

[Image: knowing is half the battle]

Rotate a Shaded D3 Globe with Moving Highlight

Posted in Front End on January 16, 2015 by Noah

D3 has some great world maps and globes. For instance, click and drag on this one.

However, its extremely flexible projection model can make it hard to do tricks that might be easy in other frameworks. There’s no easy way to get a 3D-shaded spherical globe out there. Well, okay, maybe with WebGL, but that’s its own can of worms.

Luckily, you can cheat! (Link to code later in the post)

[Animated GIF: the rotating, shaded globe]

What Are We Working With?

For D3 world maps with TopoJSON, your choices are Canvas and SVG. Both work great. We’ll use SVG here. The only reason that winds up mattering is a single animated transition later, which you could also do in Canvas if you wanted.

But if you don’t want a simple flat linear gradient (a straight fade across the square), your only other choice is a flat radial (round) gradient. That’s true in both Canvas and SVG unless you want to shade pixel by pixel. And for speed reasons, you don’t.

[Image: a flat blue radial gradient]

You know how to exactly match a 3D sphere’s shading with a flat circular gradient, on a sphere that may be weirdly rotated, right?

Yeah, me either.

Background Image?

For a moving gradient, you can’t just drop in a background picture. Darn it.

I mean, you could. But then either it wouldn’t move, which defeats the whole point, or you’d need a video which is big and slow to load and scales funny.

Also, again, that whole “may be rotated in random directions” thing is a bear.

Implementation? Cheat!

So let’s talk about the cheat to use, shall we?

First off, there will be a brightest spot, wherever on Earth is closest to the sun. You can calculate that from the time of day, though it’s a tad annoying — see the code later for details. I pick the brightest longitude correctly from the time of day and say, “you know what? I declare the brightest latitude is always 45 degrees north.” Which is pretty, but completely inaccurate.

You can check whether a projected point is clipped, which lets you tell if the brightest point is on the visible side of the globe or the other one.
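(One way to do that check, sketched below. This isn’t necessarily how the example code does it; it just relies on the fact that, on an orthographic globe, a point is visible when it’s within 90 degrees of the center of view.)

// Sketch: is a [longitude, latitude] point on the visible hemisphere?
// Assumes D3 v3's d3.geo.distance and an orthographic projection.
function isVisible(projection, point) {
  var rotate = projection.rotate();        // [lambda, phi, gamma], in degrees
  var center = [-rotate[0], -rotate[1]];   // the lon/lat facing the viewer
  return d3.geo.distance(point, center) < Math.PI / 2;
}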

I then set up a radial gradient. If we’re dark-side-showing, I always use the same fairly dark blue gradient. If we’re light-side-showing, I pick a gradient centered at the very brightest point. I also use a two-step gradient in both cases, mostly so there can be a small very bright highlight and a larger, more subtle gradient around the edges. It works decently, as you can see from the visual.

This gives us everything except the sunlight fading in and out nicely, which is yet another simple cheat.

For that, I just keep a global variable. Did we just change from dark-side-showing to light-side-showing or vice versa? If so, do a simple animation on the gradient. If not, move the gradient center, but don’t change the colors.

What’s the Code Like?

The basic globe-and-rotation is based on one of the simpler D3 examples, and works the same way.

A lot more of the code is calculating the brightest spot and the gradient. Here it is on bl.ocks.org, for the full code.

I calculate the brightest spot like this:

// Calculate brightest and darkest longitudes
var current_time = new Date() - 0;
var day_in_millis = 24 * 60 * 60 * 1000;
var half_day = day_in_millis / 2;
var offset_from_day_start = current_time % day_in_millis;
var offset_from_noon = offset_from_day_start - half_day;
// The sun moves one degree of longitude every 240 seconds
var degrees_from_noon = parseInt(offset_from_noon / 240000.0);
if(degrees_from_noon >= 180.0)
  degrees_from_noon -= 360.0;

lightest_longitude = -degrees_from_noon;

I won’t copy-and-paste the code for creating and transitioning the gradient… But mostly it’s simple. Transition the center of it always. Transition the colors only if we just changed between dark and light. Check the runnable code on bl.ocks.org for full details.

What Does That Look Like?

The animated GIF up top is a screen capture of my actual example code, or you can see it on bl.ocks.org.

Let’s see that animated gif bigger this time. Click on it for full size.

(And yet the real D3 version is still smoother.)

[Animated GIF: the rotating, shaded globe, larger]

CoffeeScript Comprehensions – Where’s My For/In Loop?

Posted in Front End on January 8, 2015 by Noah

CoffeeScript is a fun JavaScript generator. I love their style of documentation. I love that it’s sort of a weird hybrid of Ruby, YAML and JavaScript but it still reads well. I love that it’s really pretty weird, like most JavaScript.

But sometimes it takes me a while to figure out the CoffeeScript way to do something.

Like loops.

Continue reading

Is HEVC(H.265) Such an Enabling Technology for Cloud Gaming?

Posted in Uncategorized on December 19, 2014 by yueshishen

Digital video facilitates long-distance visual communication over the Internet and has enabled a number of new-media services such as HD broadcast, on-demand content streaming, teleconferencing, and cloud gaming. H.264/AVC is the current state of the art and the most widely used video compression standard. It was published back in 2003 and has enjoyed huge commercial success for more than 10 years.

In contrast to analog video, digital video exploits perceptual, spatial, temporal and statistical redundancy to achieve bit rate reduction at the cost of latency and computation. So, the trade-off among

  • low bit rate
  • high video quality
  • low latency
  • low computation cost

is indeed one of the most fundamental problems for a video engineer to understand, study, resolve and optimize when asked to design an encoder solution for a given application.

Let’s use live TV broadcast vs. on-demand content streaming (e.g., YouTube) as an example, to explain how different applications give rise to different encoder design objectives.

  • Latency: Live TV has a typical overall delay of 3-10 seconds, with the upper limit mainly driven by live sports broadcasts (e.g., TV can’t lag too far behind radio); YouTube viewers don’t care about delay as long as the video content plays out smoothly (buffering is a way of sacrificing delay for smoothness).
  • Computation: A live TV program needs to be encoded in real time on a dedicated hardware encoder, and low computation cost (which translates to operating cost) is a big design requirement for broadcast engineers; on-demand content is encoded offline (but streamed in real time) by software encoders running on a server farm, which means computation cost is usually not a concern.
  • Low bit rate vs. high video quality: Live TV is typically broadcast at the bit rate of the lowest acceptable video quality (in order to squeeze as many channels as possible into a given bandwidth); YouTube/Netflix streams at the highest possible bit rate, the biggest reliable bandwidth between the streaming server and the client, to achieve the highest possible video quality. However, this doesn’t mean TV broadcast has lower video quality than on-demand streaming. On the contrary, live TV is often encoded by the most sophisticated ASIC-based professional encoders, which deliver the best video quality at any given bit rate.

Cloud gaming, again, is very different from both live TV and YouTube. For cloud gaming, ultra-low latency (<100 ms) is the number-one must-have that can't be compromised in any way: if the game is not responsive, it is simply not playable. Also, the game content is encoded in real time and one encoder is dedicated to one client, so the computation/operating cost is another critical requirement for making a cloud-gaming business profitable. Although video quality is a relatively lower priority, it is becoming one of the major competitive differentiators.

[Figure: the bit rate / quality / latency / computation trade-off]

Introduction of HEVC/H.265

The High Efficiency Video Coding (HEVC) standard is the latest video compression standard, a successor to H.264/MPEG-4 AVC (Advanced Video Coding). Like MPEG-2 and H.264, HEVC was developed by a joint team from the two major international standardization bodies: ISO/IEC’s Moving Picture Experts Group (MPEG) and ITU-T’s Video Coding Experts Group (VCEG). That is why HEVC also goes by another name, H.265.

The standardization has been a long process: a dozen meetings have been held since the first meeting of MPEG’s and VCEG’s Joint Collaborative Team on Video Coding (JCT-VC) in April 2010. The standard was finally approved and published by both ISO/IEC and ITU-T in 2013[1][2]. In January 2014, MPEG LA announced an HEVC patent portfolio license that is currently supported by 25 patent holders. However, quite a number of prominent H.264 licensors are still missing from the HEVC list, including Panasonic, Sony, Dolby Laboratories, Mitsubishi, Toshiba, Sharp, and Samsung. This means codec vendors (or cloud-gaming service providers who plan to develop their own HEVC implementations) might have to enter into multiple complicated HEVC licensing agreements.

HEVC is designed to double the coding efficiency of H.264 and to make significantly more use of parallel processing architectures. According to the JCT-VC’s published evaluation results[5], HEVC can achieve an average bit-rate reduction of around 35% at equal objective video quality (measured by Peak Signal-to-Noise Ratio, PSNR), or even 50% at equal subjective video quality (measured per ITU-R Rec. BT.500: Methodology for the Subjective Assessment of the Quality of Television Pictures). This can be a very attractive feature for broadcasters, on-demand content providers, cloud gaming companies, etc.: imagine only half the bandwidth being required to achieve similar video quality, or much better video quality at the same bandwidth. However, there is no such thing as a free lunch, which we will discuss at the end of this article.

HEVC’s New High-level Syntax Feature for Low-Latency Applications

H.264 allows a picture to be divided into multiple regions, consecutive in raster scan order. Such a region is called a slice, and a slice is independently decodable (it doesn’t reference macroblocks in other slices of the picture). Slices were designed for robustness: part of a picture can still be reconstructed even if some slices are lost in transmission. However, encoding a frame in multiple slices comes with a bit-cost overhead: each slice has its own slice header and restarts the CABAC context.

The Slice Segment (a.k.a. “dependent slice”) is a great feature of HEVC designed particularly for low-latency applications. A dependent slice inherits the slice header and the CABAC context from the previous dependent slice, which makes it possible to packetize/stream part of a picture without incurring the bit-cost penalty.

Why is being able to stream a partial picture useful for reducing latency? Let’s assume the network channel is inelastic. Then the overall latency from the first pixel entering the encoder to the last pixel received by the decoder is

encode time + wait-to-transmit time + network latency = 2 frame time + network latency

where the wait-to-transmit time is the time for one frame’s worth of bit stream to fully enter the network channel.

Now suppose we can transmit part of a picture while the rest of the picture is still being encoded (the second case in the diagram below). The overall latency becomes

encode time (of a dependent slice) + wait-to-transmit time + network latency = 1.x frame time + network latency

where 0.x can be as low as 1 / (number of dependent slices in a picture), but is often higher in practice. That is because the encoding time of a slice is proportional to its number of macroblocks (and so is fixed), while the wait-to-transmit time varies with the number of encoded bits (the complexities of different regions of a picture usually differ). Nevertheless, the 0.x is still a serious reduction from 1.

At 60 fps, even saving 0.5 frame time (8.33 ms) is very significant, given that the overall latency budget is only 100 ms.
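As a concrete best-case example, take two dependent slices per picture at 60 fps (16.67 ms per frame):

without dependent slices: 2 frame times + network latency ≈ 33.3 ms + network latency
with 2 dependent slices: (1 + 0.5) frame times + network latency ≈ 25.0 ms + network latency

which is exactly that 0.5-frame (8.33 ms) saving.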

[Figure: latency with and without dependent slices]

One thing worth noting here is HEVC’s parallel-processing features: tiles (rectangles of CTBs) and Wavefront Parallel Processing (WPP) units (rows of CTBs). Tiles and WPP can be used when implementing a codec on multi-core hardware platforms, so as to shorten the encoding/decoding time. This can be important for real-time encoding/decoding of, for example, Ultra HD, but is more or less transparent to cloud gaming companies that don’t implement their own HEVC codec.

HEVC’s New Techniques for Higher Compression Efficiency

The biggest selling point of HEVC is that it doubles the compression ratio compared to H.264, which is very appealing for HD (1280×720 or 1920×1080) or Ultra HD (3840×2160) content. A large number of new techniques contribute to HEVC’s superior coding efficiency; the most useful technical features, in my understanding, are:

1. Coding Tree Block (CTB)

A CTB is just like MPEG-2’s or H.264’s macroblock (16×16), but can be 16×16, 32×32 or 64×64. A CTB contains a quadtree of smaller Coding Blocks (CBs), as illustrated below. A CB can be independently divided into multiple square or rectangular Prediction Units (PUs), Intra or Inter, and a quadtree of Transform Units (TUs). The maximum PU and TU sizes are 64×64 and 32×32 respectively, significantly larger than H.264’s 16×16 and 8×8. Large block sizes are particularly beneficial for compressing HD or Ultra HD content.

[Figure: CTB and Coding Block quadtree partitioning]

2. 35 modes of Intra prediction

HEVC has 35 intra prediction modes in total, among which DC and Planar are similar to H.264’s; the remaining 33 directional modes support the larger PUs. H.264 has only 8 directional intra prediction modes.

[Figure: HEVC’s 35 intra prediction modes]

3. Fractional sample interpolation

Both HEVC and H.264 use quarter-pixel motion vectors. However, fractional sample interpolation for luma samples in HEVC uses separable application of an 8-tap filter for the half-sample positions and a 7-tap filter for the quarter-sample positions. This contrasts with H.264, which applies a two-stage interpolation process: first generating the values of one or two neighboring samples at half-sample positions using 6-tap filtering and rounding the intermediate results, then averaging two values at integer or half-sample positions.

[Figure: fractional sample interpolation]

4. Advanced Motion Vector Prediction (AMVP) and Merge mode

Unlike H.264’s single Motion Vector Predictor (MVP), an HEVC inter PU has a list of MV candidates and uses an index encoded in the bitstream to select the final MVP. To construct the MV candidate list, either AMVP or the Merge mode can be used. In AMVP, the MV candidates are derived from neighboring PUs, co-located PUs in reference pictures, and the zero MV. The Merge mode is very similar to H.264’s Skip or Direct (spatial and temporal) modes, i.e., the MV candidates are inherited from neighboring PUs, co-located PUs in reference pictures, and also the zero MV.

5. Sample Adaptive Offset (SAO)

After the deblocking filter, which is applied on an 8×8 grid (H.264 also has a deblocking filter, but on a 4×4 grid), an additional filter called Sample Adaptive Offset is applied on a per-CTB basis. For each CTB, the bitstream codes a filter type (band or edge) and four offset values. The purpose of SAO is to make the CTB more closely match the source image, using this additional filtering whose parameters can be determined by histogram analysis at the encoder side.

Commercial Reality of Computation Cost and Bit-rate Reduction

Because of HEVC’s claimed 50% bit-rate reduction, the majority of broadcast technology companies have been investing in HEVC for several years. In fact, many have stopped development of H.264 and are focusing their R&D entirely on HEVC.

Since 2012, I have seen more and more HEVC codec products and demos at the National Association of Broadcasters (NAB) show, the world’s largest trade show for broadcast technologies. At the recent NAB 2014 in April, most of the HEVC encoders were still software-based, and engineering effort has been spent on building real-time Ultra HD encoders (using GPU acceleration, multi-core parallel processing, smart search algorithms, and so on).

However, the video quality of HEVC has not yet reached the level the standard has promised. From my conversations with a number of vendors, the current typical HEVC bit-rate reduction is around 25-40% for 1080p and 2160p, but, more importantly, at a computation cost of 5-10x that of H.264. Such a huge computation cost requires dedicating a top-tier-CPU-equipped server to one client, which makes the server and operating costs far too high to be acceptable for end users. At such a monstrous computation cost, even a software H.264 encoder could deliver much better video quality than it does right now. Therefore, in its current state, HEVC is not useful at all for cloud gaming services.

Conclusion

HEVC of course has a number of interesting features, and I believe it will eventually become a technology that every cloud gaming company must commit engineering resources to experimenting with and prototyping. In particular, once mainstream high-end games start to do Ultra HD, HEVC is the only compression standard that can deliver the bit stream at today’s home Internet speeds (HEVC can encode 2160p30 at < 15 Mbps, while H.264 needs > 30 Mbps). On the other hand, HEVC still has a long way to go to mature, and at this stage it is not very meaningful for the cloud gaming business. The two current major killers are the far-too-high computation cost and the lack of low-power ASIC implementations. However, from our historical experience with MPEG-2 and H.264, I expect practical HEVC encoding solutions to emerge and eventually dominate the cloud-gaming industry, probably within a few years.

A Tale of Two D3s

Posted in Front End on December 12, 2014 by Noah

D3.js is a wonderful library used for all sorts of dynamic visualizations — you’ve probably seen Mike Bostock’s gorgeous New York Times graphs, for instance.

If you’ve looked into how to use it, you’ve likely read Three Little Circles and Thinking With Joins, leaving you with the impression that D3 is this fun, bizarre little functional-programming library where you bind DOM elements to data and then say what to do with the various DOM nodes.

Which, mostly, it is.

Some example visualizations from the D3js.org home page.

But there’s a second D3 library hidden inside it that works completely differently, and it can be confusing. It certainly confused me! Let’s talk about that.

TopoJSON, Geo, Streams and Paths

When you’re looking at big rotating globes, or US maps, or projections that warp everything, you’re usually not looking at “normal” D3. Instead of mapping data to DOM elements like the tutorials above, they’re showing warps of big continuous spaces. These are also the D3 examples with lots of fun projections.

Many different map projections with D3

It makes sense that that would be a weird match for normal D3. Some of them are built on Canvas, which doesn’t really use DOM nodes. What would you bind the data to? Often a single conceptual line turns into a bunch of different lines (if you click that link, click and drag the map!).

So instead, D3 has the Geo Path library, which takes a bunch of points, transforms and clips them, and outputs them (very specifically) as either SVG path language or operations on an HTML5 Canvas. It’s not nearly as general-purpose as “normal” D3, which can apply to any DOM objects you like. But it’s great for projecting maps as SVG or canvas, so I forgive it 😉
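A minimal sketch of that pipeline in its SVG flavor (D3 v3 names; “world” here is an already-loaded TopoJSON file and “svg” an existing selection, both just assumptions for the example):

// Sketch of the d3.geo path pipeline, SVG flavor (D3 v3 API).
var projection = d3.geo.orthographic().clipAngle(90);  // clip the far hemisphere
var path = d3.geo.path().projection(projection);       // geometry -> SVG path data

svg.append("path")
    .datum(topojson.feature(world, world.objects.land))  // one big land feature
    .attr("d", path);                                     // "d" gets the path string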

Of course, what it’s doing doesn’t “feel” like D3, so sometimes it’s hard to figure out even pretty simple operations if you don’t know the right question to ask. And the code can be a tad hard to understand at times…

However, once you understand that the TopoJSON and d3.geo libraries are basically unrelated to the rest of D3, you’ll stop trying to do the same things with them. They happen to be packaged together and they’re both useful for graphs and visual demos.

But other than that, you should basically treat them as independent.

And as a result, your life will be much better.

Also, because everybody likes pretty graphics, here’s some in-browser happiness from the D3 show reel as an animated gif. You’re welcome.

(Also? Go see the real thing in full SVG glory if you like it.)

From Mike Bostock's D3 Showreel

Is Your Projected Point Clipped in D3?

Posted in Front End on December 5, 2014 by Noah

D3 provides a lot of excellent map projections via streams and paths. There are even more in a separate library, in case you think Aitoff and van Der Grinten aren’t getting their fair shake in standard D3, or that the Waterman Polyhedral Projection is really underrated.

The streams even provide automatic clipping, so if you show a bunch of points, the right ones are drawn and the wrong ones aren’t. That’s awesome!

Many D3 Optional Projections

Of course, sometimes you want to know, “hey, I have this funky projection. Is this point shown?”

They’re… not perfect for that. But there’s an answer! Let’s talk about that.

Displaying a 3D World Map on a Globe

Clipping only makes sense if something gets clipped. Usually that means something like a globe view, where half the world is on the far side at any given time.

So let’s look at this snippet from Mike Bostock, which displays a rotating 3D globe.

If you want to follow along, open the link above in a new window and pop open your JavaScript console (option-command-J in Chrome on a Mac). If you want to see the code, use this URL.

Continue reading