
The elections


From Matt Yglesias on Twitter:

Very normal Democrats won all kinds of House races without reviving “blue dog” antics but also a bunch of reality checks for the capital-l Left in these results.

Not just a couple of House races where insurgent candidates fizzled, but the California rent control initiative the Washington “green new deal” initiative and the MD-Gov race all show limited appetite for ambitious left policy in even blue states.

Conversely, the more modest economic progressive agenda of Medicaid expansion and minimum wage increases continues to triumph even in very conservative states.

From Angus:

“over the past 21 midterm elections, the President’s party has lost an average 30 seats in the House, and an average 4 seats in the Senate” NY Times sez it’s R – 26 in House and + 2-5 in Senate. Yet they call it “A rebuke to Trump”. That’s kind of just wishful thinking.

Somehow — miraculously — democracy did not die, I am still writing blog posts for tomorrow morning, fascism has yet to arrive, and life goes on!

p.s. the youth vote was not up much.  And at least one Kremlin mole has been ousted.

The post The elections appeared first on Marginal REVOLUTION.

30 days ago
yeah - the genie's outta this bottle. it's not clear that the GOP will take any great pains to distance themselves from the white nationalists. there's a pretty clear base there, particularly when you couple that to a rejection of "political correctness".

The Unintelligent Design of SureFire Intelligence


On October 30th, the far-right site Gateway Pundit published documents alleging that Robert Mueller, who is heading up the investigation into foreign interference in the 2016 U.S. presidential elections, sexually assaulted a woman in 2010. The firm that produced this “investigation” was quickly revealed to be SureFire Intelligence, which has a tiny digital footprint prior to the Mueller allegations.

Jacob Wohl, a 20-year-old conservative activist best known for his reports from “hipster coffee shops in downtown LA” about how Trump is secretly popular among young liberals, tweeted about the allegations against Mueller a day before they surfaced on Gateway Pundit, which he also writes for.

DNS Ties

The “intelligence firm” that prepared the allegation, SureFire Intelligence, was linked to Wohl due to DNS registration records saved on CuteStat.com. These records show that someone using the email jacob.wohl@nexmanagement.com was involved with the domain registration for surefireintelligence.com.

Jacob Wohl previously worked at Nex Management, as is clear from their account tweeting a photograph of him with Trump, naming him as “CEO”.

A SureFire Miss

The “intelligence” firm itself seems legitimate at first glance, with over a dozen employees working there according to LinkedIn, a somewhat professional-looking website at surefireintelligence.com, a Twitter page, and a few posts on Medium [note: deleted, archived here] referencing it. However, under any actual scrutiny, all of these facades fall apart.

Searching for the various employees who list SureFire Intelligence as their employer shows that nearly all of them use stolen profile photographs. Many of these photographs have a sepia-toned filter applied, likely to disrupt reverse image search algorithms.

Their “Tel Aviv Station Chief” uses a photograph of Israeli supermodel Bar Refaeli.

One of their “Investigators” from Boston uses a stock photograph with extra filters added on.

An LA-based “Private Investigator” at SureFire Intelligence bears a strong resemblance to Nick Hopper, a British model and photographer.

A woman who describes herself as the “Head of Government Relations” at SureFire Intelligence either does not exist, or is actually a stock photo model.

The “Deputy Director of Operations” at SureFire is also fake, unless he moonlights as a minister from Michigan.

Other “employees” at SureFire Intelligence also stole their profile pictures from others, but perhaps the most brazen is their Zurich-based “Financial Investigator” — Christoph Waltz.

Few, if any, of these profile photographs produced results when run through a reverse Google Image search. However, Jacob Wohl, or whoever else created these LinkedIn profiles, was probably not aware that Yandex Image Search, which is far more capable at facial recognition, would bring back surefire results for all of the LinkedIn photographs.


Update: The Medium user “Evan Goldman”, who claimed to be an Israeli analyst specializing in writing on private intelligence, used a profile picture stolen from a model named Oran Katan, as discovered by Byron Kittle. “Goldman” wrote a glowing profile on SureFire Intelligence, claiming to visit their headquarters and speaking with their analysts. He deleted his Medium profile today, but his Twitter is still active [archive] as of this update.

The post The Unintelligent Design of SureFire Intelligence appeared first on bellingcat.

37 days ago
bedazzled jeans and a penchant for catalog photos. well done folks.

Important Flatland Research

I have long had a hard time picturing what day, night and the shape of the terminator would look like on Buckminster Fuller's Dymaxion Map. Well yesterday I wrote some code and now I know! It sort-of feels like two weird spirals turning in opposite directions. Video here.

Skip ahead about half way to see it with satellite imagery instead of flat coloring. That version is a little dark, so you'll want to full-screen it.

Anyway, Planet Flatland has a very strange sun, is what I'm saying.
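The day/night test behind the video is independent of the projection: a point on the globe is lit exactly when its great-circle distance from the subsolar point is under 90 degrees. A minimal sketch of the geometry in Python (this is not jwz's actual XScreenSaver code):

```python
import math

def is_daylit(lat, lon, sun_lat, sun_lon):
    """True if the point (lat, lon) is on the sunlit hemisphere,
    given the subsolar point (sun_lat, sun_lon), all in degrees."""
    lat, lon = math.radians(lat), math.radians(lon)
    slat, slon = math.radians(sun_lat), math.radians(sun_lon)
    # Cosine of the great-circle distance to the subsolar point;
    # positive means the sun is above the horizon there.
    cos_angle = (math.sin(lat) * math.sin(slat)
                 + math.cos(lat) * math.cos(slat) * math.cos(lon - slon))
    return cos_angle > 0

# Subsolar point on the equator at longitude 0: the near hemisphere
# is lit, the antipodal side is dark.
print(is_daylit(0, 0, 0, 0))    # True
print(is_daylit(0, 180, 0, 0))  # False
```

Sample each face of the Dymaxion layout with this test as the subsolar point moves, and the counter-rotating spirals fall out of the unfolding.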

This update will be in the next release of XScreenSaver but I figured I'd post the video now anyway, because it's neat.

Oh yeah, also,

"We must do away with the absolutely specious notion that everybody has to earn a living. It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest. The youth of today are absolutely right in recognizing this nonsense of earning a living. We keep inventing jobs because of this false idea that everybody has to be employed at some kind of drudgery because, according to Malthusian-Darwinian theory, he must justify his right to exist. So we have inspectors of inspectors and people making instruments for inspectors to inspect inspectors. The true business of people should be to go back to school and think about whatever it was they were thinking about before somebody came along and told them they had to earn a living." -- R. Buckminster Fuller, 1970

Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.

213 days ago
ha! now i have to update my xscreensaver installation.

NetChain: Scale-free sub-RTT coordination


NetChain: Scale-free sub-RTT coordination Jin et al., NSDI’18

NetChain won a best paper award at NSDI 2018 earlier this month. By thinking outside of the box (in this case, the box is the chassis containing the server), Jin et al. have demonstrated how to build a coordination service (think Apache ZooKeeper) with incredibly low latency and high throughput. We’re talking 9.7 microseconds for both reads and writes, with scalability on the order of tens of billions of operations per second. Similarly to KV-Direct, which we looked at last year, NetChain achieves this stunning performance by moving the system implementation into the network. Whereas KV-Direct used programmable NICs though, NetChain takes advantage of programmable switches, and can be incrementally deployed in existing datacenters.

We expect a lightning fast coordination system like NetChain can open the door for designing a new generation of distributed systems beyond distributed transactions.

It’s really exciting to watch all of the performance leaps being made by moving compute and storage around (accelerators, taking advantage of storage pockets e.g. processing-in-memory, non-volatile memory, in-network processing, and so on). The sheer processing power we’ll have at our disposal as all of these become mainstream is staggering to think about.

The big idea

Coordination services (e.g. ZooKeeper, Chubby) are used to synchronise access to resources in distributed systems, providing services such as configuration management, group membership, distributed locking, and barriers. Because they offer strong consistency guarantees, their usage can become a bottleneck. Today’s server-based solutions require multiple RTTs to process a query: clients send requests to coordination servers, which execute a consensus protocol (e.g. Paxos) and then send a reply back to the client. The lower bound is one RTT (as achieved by e.g. NOPaxos).

Suppose for a moment we could distribute the coordination service state among the network switches instead of using servers, and that we could run the consensus protocol among those switches. Switches process packets pretty damn fast, meaning that the query latency can come down to less than one RTT!

We stress that NetChain is not intended to provide a new theoretical answer to the consensus problem, but rather to provide a systems solution to the problem. Sub-RTT implies that NetChain is able to provide coordination within the network, and thus reduces the query latency to as little as half of an RTT. Clients only experience processing delays caused by their own software stack plus a relatively small network delay. Additionally, as merchant switch ASICs can process several billion packets per second (bpps), NetChain achieves orders of magnitude higher throughput, and scales out by partitioning data across multiple switches…

Modern programmable switch ASICs provide on-chip storage for user-defined data that can be read and modified for each packet at line rate. Commodity switches have tens of megabytes of on-chip SRAM. Datacenter networks don’t use all of this, and so a large proportion can be allocated to NetChain. A datacenter with 100 switches, allocating 10MB per switch, can store 1GB in total, or 333MB of effective storage with a replication factor of three. That’s enough for around 10 million concurrent locks, for example. With distributed transactions that take 100 µs, NetChain could provide 100 billion locks per second – which should be enough for a while at least! Even just three switches would accommodate 0.3 million concurrent locks, or about 3 billion locks per second. Individual values are limited to about 192 bytes, though, if the switch is to run at full speed.
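The capacity arithmetic above is easy to check. A quick back-of-the-envelope sketch using the paper's figures (the per-lock record size implied by 333MB over 10M locks, roughly 33 bytes, is my inference, not a number from the paper):

```python
# Storage: 100 switches x 10MB of on-chip SRAM, replication factor 3.
switches = 100
sram_per_switch_mb = 10
replication = 3

total_mb = switches * sram_per_switch_mb      # 1000 MB, i.e. ~1GB raw
effective_mb = total_mb // replication        # ~333 MB usable
print(effective_mb)                           # 333

# Throughput: 10M concurrent locks, each held for a 100 µs transaction.
concurrent_locks = 10_000_000
lock_hold_seconds = 100e-6
locks_per_second = concurrent_locks / lock_hold_seconds
print(f"{locks_per_second:.0e}")              # 1e+11, i.e. 100 billion/s
```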

We suggest that NetChain is best suited for small values that need frequent access, such as configuration parameters, barriers, and locks.

Consistent hashing is used to partition the key-value store over multiple switches.
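A minimal illustration of the consistent hashing idea, using the standard ring construction; this is not the paper's exact implementation, and the switch names are made up:

```python
import bisect
import hashlib

def _h(s):
    # Stable hash: map a string onto the ring as a large integer.
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class ConsistentHash:
    def __init__(self, switches, points_per_switch=100):
        # Each switch contributes many ring points so keys spread evenly.
        self.ring = sorted((_h(f"{s}:{i}"), s)
                           for s in switches for i in range(points_per_switch))
        self.points = [p for p, _ in self.ring]

    def lookup(self, key):
        # The first ring point clockwise of the key's hash owns the key.
        idx = bisect.bisect(self.points, _h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHash(["s1", "s2", "s3"])
print(ring.lookup("lock:order-42"))  # always the same switch for this key
```

Adding or removing a switch only remaps the keys adjacent to its ring points, which is part of what makes incremental deployment and failure recovery cheap.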

In-Network processing + Chain replication = NetChain

So we’ve convinced ourselves that there’s enough in-switch storage to potentially build an interesting coordination system using programmable switches. But how do we achieve strong consistency and fault-tolerance?

Vertical Paxos divides the consensus protocol into two parts: a steady state protocol, and a reconfiguration protocol. These two parts can be naturally mapped to the network data and control planes respectively. Both read and write requests are therefore processed directly in the switch data plane without controller involvement. The controller handles system reconfigurations such as switch failures, and doesn’t need to be as fast because these are comparatively rare events.

For the steady state protocol, NetChain uses a variant of chain replication. Switches are organised in a chain structure, with read queries handled by the tail and write queries sent to the head, processed by each node along the chain, and replied to at the tail.

Queries are routed according to the chain structure, building on top of existing underlay routing protocols. Each switch is given an IP address, and an IP list of chain nodes is stored in the packet header. The destination node in the IP header indicates the next chain node. When a switch receives a packet and the destination IP matches its own address, it decodes the query and performs the read or write operation. Then it updates the destination node to the next chain node, or to the client IP if it is the tail.

Write queries store chain IP lists as the chain order from head to tail; read queries use the reverse order (switch IPs other than the tail are used for failure handling…). The chain IP lists are encoded to UDP payloads by NetChain agents. As we use consistent hashing, a NetChain agent only needs to store a small amount of data to maintain the mapping from keys to switch chains.
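The routing scheme above can be simulated in a few lines. This is a toy model, with Python dicts standing in for switch SRAM and a hop index standing in for the destination-IP rewriting:

```python
def process_write(chain, key, value, stores):
    """Head-to-tail write: every replica applies the update in order."""
    packet = {"chain": list(chain), "hop": 0}
    while packet["hop"] < len(packet["chain"]):
        node = packet["chain"][packet["hop"]]
        stores[node][key] = value   # this hop applies the write
        packet["hop"] += 1          # "rewrite the destination IP" to next hop
    return packet["chain"][-1]      # the tail is the node that replies

def process_read(chain, key, stores):
    """Reads are served by the tail alone."""
    return stores[chain[-1]].get(key)

stores = {n: {} for n in ("s1", "s2", "s3")}
tail = process_write(["s1", "s2", "s3"], "cfg", "v1", stores)
print(tail, process_read(["s1", "s2", "s3"], "cfg", stores))  # s3 v1
```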

Because UDP packets can arrive out of order, NetChain introduces its own sequence numbers to serialize write queries.
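The serialization rule is a last-writer-wins check, sketched here as an illustrative simplification of the per-key switch logic:

```python
def apply_if_newer(store, last_seq, key, value, seq):
    """Apply a write only if its sequence number beats the newest
    one already applied for this key; otherwise drop it as stale."""
    if seq > last_seq.get(key, -1):
        store[key] = value
        last_seq[key] = seq
        return True
    return False

store, last_seq = {}, {}
apply_if_newer(store, last_seq, "k", "new", 2)
apply_if_newer(store, last_seq, "k", "old", 1)  # late arrival, ignored
print(store["k"])  # new
```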

The end solution offers per-key read and write queries, rather than per-object. NetChain does not support multi-key transactions.

Handling failures and configuration changes in the control plane

The NetChain controller runs as a component in the network controller. Switch failures are handled in two stages. Fast failover quickly reconfigures the network to resume serving queries with the remaining nodes in each affected chain. This degraded mode can tolerate one fewer failure than the original configuration, of course. Failure recovery then adds other switches as new replication nodes for the affected chains, restoring their full fault tolerance.

Fast failover is pretty simple. You just need to modify the ‘next’ pointer of the node before the failed one to skip that node.

This is implemented with a rule in the neighbour switches of the failed switch, which checks the destination IP. If it is the IP of a failed switch, then the destination IP is replaced with the next chain hop after the failed switch, or the client IP if we’re at the tail.
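As a sketch, the neighbour's rewrite rule amounts to skipping one entry in the chain list (the function and names here are illustrative, not the paper's actual match-action table):

```python
def next_hop(chain, current, failed, client_ip):
    """Destination for the packet leaving `current`, skipping a
    failed successor; past the tail, the reply goes to the client."""
    i = chain.index(current) + 1
    if i < len(chain) and chain[i] == failed:
        i += 1  # skip the dead node
    return chain[i] if i < len(chain) else client_ip

chain = ["s1", "s2", "s3"]
print(next_hop(chain, "s1", "s2", "client"))  # s3 (s2 has failed)
print(next_hop(chain, "s3", None, "client"))  # client
```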

For failure recovery, imagine a failed switch mapped to k virtual nodes. These are randomly assigned to k live switches, helping to spread the load of failure recovery. Since each virtual node belongs to f+1 chains, it now needs to be patched into each of them, which is again done by adjusting chain pointers. In the following figure, fast failover has added the blue line from N to S2, and then failure recovery patches in the orange lines to and from S3.

Before splicing in the new node, the state is first copied to it. This can be time-consuming, but availability is not affected. The switchover to add the new node is done using a two-phase atomic protocol once the state is in place. To further minimise service disruptions, switches are mapped to multiple virtual groups (e.g. 100), so that each group is available for 99% of the recovery time and only queries to one group at a time are affected (paused) by the switchover protocol.

Incremental adoption and hybrid deployments

NetChain is compatible with existing routing protocols and network services and therefore can be incrementally deployed. It only needs to be deployed on a few switches initially to be effective, and then its throughput and storage capacity can be expanded by adding more switches.

NetChain offers lower-level services (no multi-key transactions) and reduced per-key storage compared to full-blown coordination services. For some use cases this won’t be a problem (e.g., managing large numbers of distributed locks). In other cases, you could use NetChain as an accelerator for server-based solutions such as Chubby or ZooKeeper, with NetChain storing hot keys with small value sizes while traditional servers store big and less popular data.


The testbed consists of four 6.5 Tbps Barefoot Tofino switches and four 16-core server machines with 128GB memory. NetChain is compared against Apache ZooKeeper.

The comparison is slightly unfair; NetChain does not provide all features of ZooKeeper, and ZooKeeper is a production-quality system that compromises its performance for many software-engineering objectives. But at a high level, the comparison uses ZooKeeper as a reference for server-based solutions to demonstrate the performance advantages of NetChain.

In all the charts that follow, pay attention to the breaks in the y-axis and/or the use of log scales.

NetChain provides orders of magnitude higher throughput than ZooKeeper, and neither system is affected by value size (in the 0-128 byte range at least) or store size:

As the write ratio goes up, NetChain keeps on delivering maximum throughput, whereas ZooKeeper’s performance starts to drop off. (At 100% write ratio, ZooKeeper is doing 27 KQPS, while NetChain is still delivering 82 MQPS – each test server can send and receive queries at up to 20.5 MQPS, and there are four of them).

NetChain is also more tolerant of packet loss:

NetChain has the same latency for both reads and writes, at 9.7 µs per query, and this stays constant even when all four servers are generating queries at their maximum rate. The system will saturate at around 2 BQPS. ZooKeeper meanwhile has 170 µs read latency and 2350 µs write latency at low throughput; it saturates at 27 KQPS for writes and 230 KQPS for reads.

As you add more switches to NetChain, throughput grows linearly.

The following figure shows that using virtual groups successfully mitigates most of the performance impact during failure recovery:

And finally, here’s a TPC-C new order transaction workload, which allows testing transactions under different contention levels. By using NetChain as a lock server, the system can achieve orders of magnitude higher transaction throughput than ZooKeeper.

We believe NetChain exemplifies a new generation of ultra-low latency systems enabled by programmable networks.

226 days ago
file under: P4 tricks

Welcome to the Bridge Club


Sometimes less is more. It’s a phrase that gets thrown around a lot, maybe too much, but it can definitely ring true when it comes to backpacking and other outdoor pursuits. When gear is selected for any outing, it’s important to consider which pieces are necessary to carry along, or whether one piece can serve more than one purpose. I suppose a better word might be streamlined.

Welcome the newest member of our touring family, the Bridge Club. The name of the game with the Bridge Club is streamlined simplicity, designed for those tours that traverse both on-road or off-road surfaces.

When it comes to our touring line, the Pugsley is the ride you need when float and traction are critical. The Troll and Ogre are great for off-road touring and carrying BIG loads, but your route may be more pavement heavy or you may never want to run rim brakes. The ECR is the ultimate off-road touring rig, but not everyone is looking to tour across Mongolia. Our trucker line is the go-to for long distance road touring. But what if you want to throw caution to the wind and let your route, plans and terrain be chosen on a whim? On-road? Off-road? Who gives a shit. The Bridge Club will help you bridge the gap…See what I did there?

Bridge Club Geometry

We’ve been making off-road touring bikes for a while now, and over the years we have arrived at a pretty good geometry recipe when it comes to touring on dirt. The Bridge Club uses all of this experience and doesn’t stray far from the Troll/Ogre in ride geometry or fit geometry.

The first big difference is that the Bridge Club is designed around 27.5 x 2.4” (584 BSD) wheels/tires. We designed the Bridge Club to be a good off-road touring rig but kept in mind that someone may want to throw on 700c wheels/tires and panniers and knock out a classic road tour; this is totally possible with the Bridge Club.

The bottom bracket height reflects this at around 295mm/11.6” with the stock tire. This is about 13mm lower than the Ogre and 10mm lower than the Troll. This BB height will be adequate for most off-road situations you find yourself in but won’t be too high if you decide to do some on-road touring. The headtube angles, seat tube angles and ETT closely resemble those of the Troll with a few exceptions. The XS Bridge Club has a 1 degree slacker headtube angle than the Troll to account for toe clearance with a larger OD tire, and ETT dimensions vary slightly across sizes.

Rear Spacing

I know what’s coming – “Fuck the bike industry and all of its ever-evolving standards, blah blah blah.” I hear your pain, I really do. The Bridge Club isn’t using a new standard but it is a little more obscure. The bike is designed around 141mm Boost QR hubs to allow for chain clearance and larger volume rubber. The rear spacing is Gnot Boost QR, in that the frame is designed at 138mm. This allows the use of 141mm Boost QR hubs or standard 135mm QR hubs. The frame will flex in or out 1.5mm per side to accommodate either hub width. It’s the same Gnot Boost idea but based around QR axles rather than thru axles.

Bridge Club Features

Many years of designing and testing touring bikes have led us to include feature sets that account for just about anything you may want to attach to your bike. For an ultimate off-road touring rig, like our ECR, numerous three pack mounts, dedicated Rohloff slots, horizontal dropouts, trailer mounts and cast yokes allow for nearly infinite options when it comes to customization. For someone who is just getting into touring or bikepacking that can be a lot to wrap your head around. Or maybe you have been bikepacking and touring for years and you know exactly what you want in a bike. After all, the Swiss Army Knife approach to features may be more than you want. The Bridge Club simplifies those features to the necessities that will get you there and back.

We designed a simpler plate style dropout that still has the Surly aesthetic, but without all of the complexity of our Troll dropout. The dropout features a vertical slot for QR wheels, standard IS brake adapter capability, and mounts for racks and fenders. The ability to run a Rohloff Speedhub wasn’t forgotten, but isn’t as prominent in the Bridge Club as it is in the Troll dropout. The upper rack/fender boss can be used with an OEM2 axle plate and the Rohloff M5 adapter. A chain tensioner is also necessary for this application. The frame will need to be compressed 1.5mm per side to run a 135mm Rohloff hub (similar to our Gnot Boost frames) and is not compatible with the Rohloff A12 hubs.

The frame has triple bottle mounts on the top and bottom of the downtube, and a seat tube water bottle mount on the SM-XL frames. Triple guides on the top tube and single guides elsewhere take care of your cable wrangling needs. There are seatstay mounted barrel bosses for your rack mounting needs and a fender mount on the seatstay bridge.

The fork features upper and lower barrel bosses, one three pack mount on each leg, midblade eyelets, and rack mounts on the fork ends.

Tire Clearance

The Bridge Club was designed around a 27.5 x 2.4” tire, but in the spirit of Fatties Fit Fine we didn’t stop there. The frame and fork have clearance for up to 27.5 x 2.8” and 700 x 47c tires.

Individual tire and rim combos may affect tire clearance and will change bottom bracket height.

Check out the Bridge Club bike page for full spec details, however, highlights include:

SRAM X5 front derailleur, GX 10 speed rear derailleur, Tubeless ready WTB i29 rims and 2.4” Riddler tires, 30.0 mm Surly stainless seat collar, and a comfortable 17 degree swept back bar.

Choices are a wonderful burden sometimes, just ask me where we should go for brunch. I know a million places but I’ll waffle for hours trying to figure out the perfect spot. See what I did there again? Dad jokes aside, sometimes simplicity is just what a person needs. The Bridge Club does just that. Where other models in our line provide that wonderful burden, the Bridge Club provides enough options to outfit your bike for that next on-road or off-road tour without the extra decisions or stress. When it’s all said and done you may even have a little extra cash to grab that frame bag, rack or seat bag and start the long ride to touring glory.

If you’re pumped up by the ramble you just read and want to check out a Bridge Club in person, the following shops have pre-ordered bikes, which are in stores now, or arriving in the coming weeks. As always, international and intergalactic availability and pricing will vary depending on your current whereabouts.

Bicycle Habitat, New York, NY
The Hub Bicycle Co-op, Minneapolis MN
Bike Touring News, Boise ID
The Bike Rack, Washington DC
Thick Bikes, Pittsburgh PA
Halcyon Bike Shop, Nashville TN
Metropolis Cycle Repair, Portland OR
Angry Catfish Bicycle Shop, Minneapolis MN
Michael’s Cycles, Prior Lake, MN
Loose Nuts Cycles, Atlanta GA
Pedal LLC, Littleton CO
City Bike Tampa, Tampa FL
Gladys Bikes, Portland OR
Lee’s Cyclery & Fitness, Fort Collins CO
Bicycle Business, Sacramento CA
YAWP! Cyclery, Edgewater CO
Ponderosa Cyclery, Omaha NE
Blue Dog Bicycles, Tucson AZ

252 days ago
that sucking sound you hear is coming from my wallet.

The RSS Revival


The platformization of the web has claimed many victims, RSS readers included. Google Reader's 2013 demise was a major blow; the company offed it in favor of "products to address each user's interest with the right information at the right time via the most appropriate means," as Google executive Richard Gingras put it at the time. In other words, letting Google Now decide what you want. And the popular Digg Reader, which was born in response to that shuttering, closed its doors this week after a nearly four-year run.

Despite those setbacks, though, RSS has persisted. "I can't really explain it, I would have thought given all the abuse it's taken over the years that it would be stumbling a lot worse," says programmer Dave Winer, who helped create RSS.

I enjoyed this story on the state of RSS by Wired's Brian Barrett because it resonates with a trend I've also noticed in the past couple of years. Many of us have often praised social networks as "winners" in the battle against pure old RSS feeds, but the reality is that RSS is here to stay. Perhaps, like rock and roll, RSS can never truly die.

What's even more interesting is that, beyond RSS as a protocol, RSS services and clients (web backends and apps) are improving and growing more powerful on a weekly basis now. Barrett mentioned Feedly, The Old Reader, and Inoreader (which I've been using since 2016 and which offers terrific power-user features); I would also add NewsBlur and Feedbin – two services that have relentlessly iterated on the RSS experience since Google Reader's demise. Just in the past few months, for instance, NewsBlur launched infrequent site stories to fix the very problem of subscribing to too many feeds, and Feedbin rolled out support for Twitter subscriptions. Both are genuine innovations that help people who want to get their news directly from the sources they choose. And if we look at the iOS side of this, apps like Fiery Feeds and lire are rethinking what advanced RSS readers for iPhone and iPad should be capable of. We wanted to do an RSS-focused episode of AppStories, and we ended up producing two of them (you can listen here and here) because there was just so much to talk about.

While millions of people may be happy getting their news from Facebook or an aggregator like Apple News (which I also use, occasionally, for more mainstream headlines), the resiliency of RSS makes me happy. There was a time when I thought all my news could come from social feeds and timelines; today, I'm more comfortable knowing that I – not a questionable and morally corrupt algorithm – fully control hundreds of sources I read each day.

→ Source: wired.com

252 days ago
i'll never forgive google for killing reader. newsblur does a great job though.
253 days ago
Have to leave a plug for Newsblur. Easily the best subscription service I have other than the monthly internet bill.