the actor who plays homelander really manages to make every act of drinking milk uniquely grotesque and menacing #theboys
implementing some of the concepts from https://www.fourmilab.ch/documents/specrend/ in WebGL and js. Here's a rendered approximation of CIE XYZ colour. It's a little off right now: too much of the center white smears out to the edges.
I'll try to write up more in a blog post tomorrow, and hopefully publish the code.
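Until then, the per-pixel step is basically a matrix multiply plus gamma encoding. A rough Python sketch of that math (the matrix coefficients are the standard sRGB/D65 ones, not taken from the specrend port):

```python
# Sketch of the CIE XYZ -> sRGB step a shader would do per pixel.
# Matrix rows are the standard sRGB/D65 values, an assumption on my part.

def xyz_to_linear_srgb(x, y, z):
    """Convert a CIE XYZ triple to linear sRGB (may fall outside [0, 1])."""
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return r, g, b

def gamma_encode(c):
    """Apply the sRGB transfer curve to one linear channel."""
    c = max(0.0, min(1.0, c))  # clamp out-of-gamut values
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
```

Sanity check: the D65 white point (X≈0.9505, Y=1.0, Z≈1.0890) should come out as roughly (1, 1, 1) in linear sRGB.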
the internet is a cruel place, especially for attackers #inktober2019 bait
@firstname.lastname@example.org giving a talk on taking on new tech
- react, elm, Haskell: new, nonstandard tech
- all risks: no ecosystem, learning curve, hiring
- controlled experiment: low-risk project, get it into prod, expand or back out
- react rewards: fewer bugs, maintainable code, grow as devs
- elm rewards: move quickly w/o breaking, reliable front-end, grow as devs
- Haskell rewards: easier to maintain complex biz logic, fewer runtime errors
#PhillyETE keynote by Jessica Kerr. I learned a new word: symmathesize.
Catch me at #PhillyETE
Using reported gate/qubit error rates to improve runtime accuracy
- not just limiting circuit depth, but prioritizing qubits and connections based on measured error
- Scaffold to IR, then uses SMT and error data to optimize the final OpenQASM output for a given day
- SMT scales to 72 qubits; hoping heuristics-based versions will scale to 1000s
- improvement over qiskit
- T1 is modelled as hard cutoff, not exp decay
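Toy sketch of the core idea as I understood it (made-up coupling map and error rates, not real backend data; the actual tool hands this to an SMT solver rather than brute-forcing):

```python
# Score candidate qubit mappings by reported calibration data instead of
# treating all qubits as equal. The error rates below are invented numbers.

# (qubit_a, qubit_b) -> reported two-qubit gate error rate for the day
cx_error = {
    (0, 1): 0.012,
    (1, 2): 0.034,
    (2, 3): 0.009,
    (3, 4): 0.051,
}

def success_prob(path):
    """Estimated success probability of one two-qubit gate per edge of the path."""
    prob = 1.0
    for a, b in zip(path, path[1:]):
        edge = (a, b) if (a, b) in cx_error else (b, a)
        prob *= 1.0 - cx_error[edge]
    return prob

def best_chain(length):
    """Pick the contiguous chain of `length` qubits with the fewest expected errors."""
    qubits = sorted({q for edge in cx_error for q in edge})
    chains = [qubits[i:i + length] for i in range(len(qubits) - length + 1)]
    return max(chains, key=success_prob)
```

With these numbers, a 3-qubit circuit lands on qubits 1–3, routing around the noisy (3, 4) link.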
Some more slides
Oh crap he's back.
That duck must have been underwater for *minutes*.
Splitting CNN for better efficiency
- split CNNs into tiles across GPUs, less memory on/offloading
- costs in quality of model, changes semantics
- mitigate with stochastic splitting, regularizes data
- some scheduling to see where cost of splitting vs time of offloading
"Split-CNN: Splitting Window-based Operations in Convolutional Neural Networks for Memory System Optimization"
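A minimal 1-D toy of the splitting cost (my own illustration, not the paper's code): convolving tiles independently loses the cross-tile outputs at the seam, which is exactly the "changes semantics" cost they mitigate.

```python
# Convolving two tiles independently drops the outputs that straddle the
# seam between tiles -- the quality cost of splitting.

def conv1d(xs, kernel):
    """Valid 1-D convolution (cross-correlation) of a list with a kernel."""
    k = len(kernel)
    return [sum(xs[i + j] * kernel[j] for j in range(k))
            for i in range(len(xs) - k + 1)]

def split_conv1d(xs, kernel, tiles=2):
    """Split the input into tiles and convolve each tile on its own."""
    step = len(xs) // tiles
    chunks = [xs[i * step:(i + 1) * step] for i in range(tiles)]
    out = []
    for chunk in chunks:
        out.extend(conv1d(chunk, kernel))
    return out

signal = [1, 2, 3, 4, 5, 6, 7, 8]
kernel = [1, 0, -1]
full = conv1d(signal, kernel)          # 6 outputs
split = split_conv1d(signal, kernel)   # 4 outputs: the seam positions are gone
```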
BitTactical : combined approaches to improve NN computation by skipping zeros
- "fill in" weights of zero with non-zero weight/activation muls
- previous efforts searched anywhere in pending muls, restricting depth/breadth of search improves perf
Backend, not as clear on this:
- bit-serial computation lets it focus on specific bits for precise multiplication
- smaller granularity (8- vs 16-bit) means fewer multiplications due to sparse bits
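Back-of-envelope model of the zero-skipping front end (my simplification, not the actual hardware): a slot that hits a zero weight "fills in" with a non-zero weight from a small lookahead window, so the engine finishes in fewer cycles.

```python
# Counting cycles with and without zero-skipping. The lookahead window is
# the restricted depth/breadth search the talk said improves perf.

def dense_cycles(weights):
    """Cycles for a naive engine: one multiply slot per weight, zeros included."""
    return len(weights)

def cycles_with_fill(weights, lookahead=2):
    """Cycles until every non-zero weight is multiplied, when a zero slot can
    steal ("fill in" with) a non-zero weight from up to `lookahead` slots ahead."""
    pending = list(weights)
    remaining = sum(1 for w in pending if w != 0)
    if remaining == 0:
        return 0
    for i in range(len(pending)):
        if pending[i] == 0:
            # search only a small window ahead for work to pull in
            for j in range(i + 1, min(i + 1 + lookahead, len(pending))):
                if pending[j] != 0:
                    pending[i], pending[j] = pending[j], 0
                    break
        if pending[i] != 0:
            remaining -= 1
            if remaining == 0:
                return i + 1
    return len(pending)

w = [3, 0, 0, 5, 0, 7, 0, 0]
```

For this made-up weight row: 8 cycles dense, 6 with plain zero-skipping in place (lookahead=0), 4 with a 2-slot fill-in window.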
Mega-microfluidics: speculative WACI talk about scaling up micro-fluidics
- parallelizing boards
- routing and data-movement is extremely expensive in terms of time, have to pipe from board to board
- cross-contamination and isolation of fluidic experiments
Some slides, the talk was funny and insightful
I'm a fan of Genetic Programming, these guys applied it to GPGPU programs and it actually worked
CORF: using compiler, µ-arch, and graph-processing to improve perf by reducing register use on GPUs
- reduce register reads by packing multiple logical registers into a 4-byte physical register
- big problem is which registers can/should be packed
- collect some info at compile time, used during exec
- modified 2-coloring alg to maximize packing
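The packing problem itself, as a toy (a greedy first-fit stand-in, not their modified 2-coloring, and ignoring the liveness/affinity constraints that make the real problem hard):

```python
# Fit narrow logical registers into 4-byte physical registers so operands
# that fit together share one register. Widths are made-up example data.

PHYS_BYTES = 4

def pack_registers(widths):
    """First-fit-decreasing packing of logical register widths (in bytes)
    into 4-byte physical registers. Returns one list of logical-register
    indices per physical register."""
    order = sorted(range(len(widths)), key=lambda i: -widths[i])
    regs, free = [], []
    for i in order:
        for r, space in enumerate(free):
            if widths[i] <= space:
                regs[r].append(i)
                free[r] -= widths[i]
                break
        else:
            regs.append([i])
            free.append(PHYS_BYTES - widths[i])
    return regs

# six logical registers: two full-width, four half-width
widths = [4, 2, 2, 4, 2, 2]
packed = pack_registers(widths)  # 4 physical registers instead of 6
```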