London DevOps | London based DevOps

On Her Majesty's Digital Service
Gareth Rushgrove | 18 Aug 2011 | http://morethanseven.net/2011/08/19/On-her-majestys-digital-service.html

This blog post is mainly an excuse to use the pun in the title. It's also an opportunity to tell folks who don't already know that I'll be starting a new job on Monday working for the UK Government. I'm going to be working for the Government Digital Service, a new department tasked with a pretty wide remit: sorting the Government out online.

The opportunity is huge. And when it came around I couldn’t turn it down. I’m going to be working with a bunch of people I’ve known and respected for a while, as well as other equally smart people. That means I’m going to be back in London again as well.

Hopefully I’ll be able to talk lots about what we’re up to. The groundwork for that has already been laid by the alpha.gov team who have been blogging furiously about topics of interest.

Social networks for servers
R.I. Pienaar | 14 Aug 2011 | http://www.devco.net/archives/2011/08/14/social-networks-for-servers.php

A while ago Techcrunch profiled a company called Nodeable, who closed $2 million in funding. They bill themselves as a social network for servers and have a cartoon and a beta invite box on their site, but no actual usable information. I signed up but never heard from them, so I've not seen what they're doing at all. Either way, I thought the idea sucked.

Since then I've kept coming back to it, thinking maybe it's not a bad idea at all. I've seen many companies try to include the rest of the business in the status of their networks, with big graph boards and complex alerting that is perhaps not suited to the audience.

These experiments often fail and cause more confusion than clarity, as the underlying systems are not designed to be friendly to business people. I had a quick Twitter conversation with @patrickdebois too, and a few people on ##infra-talk were keen on the idea. It's not really a surprise that a lot of us want to make the event stream of our systems more accessible to the business and other interested parties.

So I set up a copy of status.net – actually I used the excellent appliance from Turnkey Linux, and it took 10 minutes. I gave each of my machines an account with the username being its MAC address, and hooked it into my existing event stream. It was all less than 150 lines of code, and the result is quite pleasing.
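
As a sketch of what such a hook can look like: status.net exposes a Twitter-compatible API, so posting an event is just an authenticated HTTP POST. The host, account and password below are made up for illustration:

    $ curl -u '00:16:3e:aa:bb:cc:secret' \
        -d status='Experiencing high load #warning' \
        http://status.example.com/api/statuses/update.json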

What makes this compelling is specifically that it is void of technical detail – no mention of /dev/sda1, byte counts or percentages that make text hard to scan or understand for non-technical people. Just simple things like "Experiencing high load #warning". This is something normal people can easily digest. It's small enough to scan really quickly, and for many users this is all they need to know.

At the moment I have Puppet changes, IDS events and Nagios events showing up on a Twitter-like timeline for all my machines. I hash tag the tweets using things like #security, #puppet and #fail for failing Puppet resources, and #critical, #warning and #ok for Nagios, etc. I plan on also adding hash tags matching machine roles as captured in my CM.
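
As an illustration of the Nagios side, the notification command can be a tiny wrapper around the same API call; the status.net host, account naming and password here are assumptions, not from the post:

    #!/bin/sh
    # Illustrative notification wrapper: arguments are host, state and description.
    # The status.net host and credentials are made up for the example.
    HOST="$1"; STATE="$2"; DESC="$3"
    TAG=$(echo "$STATE" | tr 'A-Z' 'a-z')   # CRITICAL -> critical
    curl -s -u "$HOST:secret" \
         -d status="$DESC is $STATE #$TAG #nagios" \
         http://status.example.com/api/statuses/update.json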

Status.net is unfortunately not the tool to build this on; it's simply too buggy and too limited. You can make groups and add machines to groups, but these aren't user managed like Twitter's lists. I can see a case where a webmaster would just add the machines he knows his apps run on to a list and follow that; you can't easily do this with status.net. My machines have their FQDNs as real names, and why on earth status.net doesn't show real names in the timeline I don't get – I hope it's a setting I missed. I might look towards something like Yammer for this, or if Nodeable eventually ships something, that might do.

I think the idea has a lot of merit. If I think about the 500 people I follow on Twitter, it's hard work but not at all unmanageable, and you would hope those 500 people are more chatty than a well managed set of servers. The tools we already use – lists, selective following, hashtags, and clients for mobile, desktop, email notifications and RSS – all apply to this use case.

Imagine your servers' profile information contained a short description of function, the contact email address was the team responsible for it, and the geo information was datacenter coordinates. You could identify 'hot spots' in your infrastructure just by looking at tweets on a map, just like we do with tweets from people.

I think the idea has legs, even if status.net is a disappointment. I am quite keen to see what Nodeable comes out with, and I will keep playing with this idea.

Talking Configuration Management, Vagrant And Chef At LRUG
Gareth Rushgrove | 10 Aug 2011 | http://morethanseven.net/2011/08/11/Talking-configuration-management-vagrant-and-chef-at-lrug.html

I stepped in at the last minute to do a talk at the last London Ruby User Group. From the feedback afterwards folks seemed to enjoy it, and I certainly had fun. Thanks to everyone who came along.

As well as the slides, the nice Skills Matter folks have already uploaded the videos from the night.

Vendor News, 7 August
The Build Doctor | 7 Aug 2011 | http://www.build-doctor.com/2011/08/07/vendor-news-7-august/

  • Bamboo 3.2 does more release features
  • Family Search (aka the LDS) have done a deal with Electric Cloud, and allegedly dropped their cycle time from 30 days to 40 minutes.
  • Urbancode (who sponsor this blog) have split their products up into a dev-friendly CI product and an Ops-friendly devops product.
  • There’s more news out there, but that’s all that hit my inbox.

    Vendor News, 7 August is a post from: The Build Doctor. Sponsored by AnthillPro, the build and deployment automation server that lets you release with confidence.


    Vim With Ruby Support Using Homebrew
    Gareth Rushgrove | 30 Jul 2011 | http://morethanseven.net/2011/07/31/Vim-with-ruby-support-using-homebrew.html

    I've spent a bit of time this weekend cleaning, tidying and upgrading software on my Mac. While doing that I got round to compiling my own Vim. I'd been meaning to do this for a while: I prefer using Vim in a terminal to using MacVim, and I like having access to things like Command-T, which requires the Ruby support the inbuilt version lacks.

    Vim isn't in Homebrew, because Homebrew's policy is not to provide duplicates of already installed software. Enter Homebrew Alt, which provides formulas for anything not allowed by the Homebrew policy. As luck would have it, a Vim formula already exists, and installing from it couldn't be easier.

    brew install https://raw.github.com/adamv/homebrew-alt/master/duplicates/vim.rb

    As it turns out, this failed the first time I ran it because I had an rvm-installed Ruby on my path. I reset this to the system version and everything compiled fine.

    rvm use system

    Note also that it's really quite simple to use a different revision or different flags when compiling. Just download that file, modify it, serve it locally (say with a Python one-line web server) and point brew install at it. Next step: running off head for all the latest and greatest Vim features.
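
    A minimal sketch of that workflow, assuming you serve the formula from the current directory on port 8000:

    $ curl -O https://raw.github.com/adamv/homebrew-alt/master/duplicates/vim.rb
    $ vim vim.rb   # change the revision or configure flags
    $ python -m SimpleHTTPServer 8000 &
    $ brew install http://localhost:8000/vim.rb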

    Rich data on the CLI
    R.I. Pienaar | 29 Jul 2011 | http://www.devco.net/archives/2011/07/29/rich-data-on-the-cli.php

    I've often wondered how things will change in a world where everything is a REST API and how relevant our Unix CLI tool chain will be in the long run. I've known we needed CLI ways to interact with data – like JSON data – and have given this a lot of thought.

    MS PowerShell does some pretty impressive object parsing on its CLI, but I was never really sure how close we could get to that in Unix. I wanted to start my journey with the grep utility, as that seemed a natural starting point and is my most used CLI tool.

    I have no idea how to write parsers and matchers, but luckily I have a very talented programmer working for me who was able to take my ideas and realize them awesomely. Pieter wrote a JSON grep and I want to show off a few bits of what it can do.

    I’ll work with the document below:

    [
      {"name":"R.I.Pienaar",
       "contacts": [
                     {"protocol":"twitter", "address":"ripienaar"},
                     {"protocol":"email", "address":"[email protected]"},
                     {"protocol":"msisdn", "address":"1234567890"}
                   ]
      },
      {"name":"Pieter Loubser",
       "contacts": [
                     {"protocol":"twitter", "address":"pieterloubser"},
                     {"protocol":"email", "address":"[email protected]"},
                     {"protocol":"msisdn", "address":"1234567890"}
                   ]
      }
    ]

    There are a few interesting things to note about this data:

    • The document is an array of hashes; this maps well to the stream-of-data paradigm we know from lines of text in a file. This is the basic structure jgrep works on.
    • Each document has another nested set of documents in an array – the contacts array.

    Examples


    The examples below show a few possible grep use cases:

    A simple grep for a single key in the document:

    $ cat example.json | jgrep "name='R.I.Pienaar'"
    [
      {"name":"R.I.Pienaar",
       "contacts": [
                     {"protocol":"twitter", "address":"ripienaar"},
                     {"protocol":"email", "address":"[email protected]"},
                     {"protocol":"msisdn", "address":"1234567890"}
                   ]
      }
    ]

    We can extract a single key from the result:

    $ cat example.json | jgrep "name='R.I.Pienaar'" -s name
    R.I.Pienaar

    A simple grep for 2 keys in the document:

    % cat example.json | 
        jgrep "name='R.I.Pienaar' and contacts.protocol=twitter" -s name
    R.I.Pienaar

    The nested documents pose a problem though: if we were to search for contacts.protocol=twitter and contacts.address=1234567890, we would get both documents back rather than none. In order to search the sub documents effectively, we need to ensure that these 2 values exist in the same sub document.

    $ cat example.json | 
         jgrep "[contacts.protocol=twitter and contacts.address=1234567890]"

    Placing [] around the 2 terms works like () but restricts the search to a specific sub document. In this case there is no sub document in the contacts array that has both twitter and 1234567890, so nothing matches.
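
    By contrast, a bracketed search where both terms do live in the same sub document should match. Based on the syntax shown above, this sketch would print just the one matching name:

    $ cat example.json |
         jgrep "[contacts.protocol=twitter and contacts.address=ripienaar]" -s name
    R.I.Pienaar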

    Of course you can have many search terms:

    % cat example.json | 
         jgrep "[contacts.protocol=twitter and contacts.address=1234567890] or name='R.I.Pienaar'" -s name
    R.I.Pienaar

    We can also construct entirely new documents:

    % cat example.json | jgrep "name='R.I.Pienaar'" -s "name contacts.address"
    [
      {
        "name": "R.I.Pienaar",
        "contacts.address": [
          "ripienaar",
          "[email protected]",
          "1234567890"
        ]
      }
    ]

    Real World

    So I am adding JSON output support to MCollective. Today I was rolling out a new Nagios check script to my nodes and wanted to be sure they all had it, so I used the File Manager agent to fetch the stats for my file from all the machines, then printed the ones that didn't match my expected MD5.

    $ mco rpc filemgr status file=/.../check_puppet.rb -j | 
       jgrep 'data.md5!=a4fdf7a8cc756d0455357b37501c24b5' -s sender
    box1.example.com

    Eventually you will be able to pipe this output to mco again and call another agent. Here I take all the machines that didn't yet have the right file and cause a Puppet run to happen on them; this is very PowerShell-like, and it's the eventual use case I am building this for:

    $ mco rpc filemgr status file=/.../check_puppet.rb -j | 
       jgrep 'data.md5!=a4fdf7a8cc756d0455357b37501c24b5' |
       mco rpc puppetd runonce

    I also wanted to know the total size of a logfile across my web servers to be sure I would have enough space to copy them all:

    $ mco rpc filemgr status file=/var/log/httpd/access_log -W /apache/ -j |
        jgrep -s "data.size"|
        awk '{ SUM += $1} END { print SUM/1024/1024 " MB"}'
    2757.9093 MB

    Now how about interacting with a webservice like the GitHub API:

    $ curl -s http://github.com/api/v2/json/commits/list/puppetlabs/marionette-collective/master|
       jgrep --start commits "author.name='Pieter Loubser'" -s id
    52470fee0b9fe14fb63aeb344099d0c74eaf7513

    Here I fetched the most recent commits in the marionette-collective GitHub repository, searched for ones by Pieter and printed the IDs of those commits. The --start argument is needed because the top of the returned JSON is not the array we care about; --start tells jgrep to take the commits key and grep that.

    Or, since it's Sysadmin Appreciation Day, how about tweets about it:

    % curl -s "http://search.twitter.com/search.json?q=sysadminday"|
       jgrep --start results -s "text"
     
    RT @RedHat_Training: Did you know that today is Systems Admin Day?  A big THANK YOU to all our system admins!  Here's to you!  http://t.co/ZQk8ifl
    RT @SinnerBOFH: #BOFHers RT @linuxfoundation: Happy #SysAdmin Day! You know who you are, rock stars. http://t.co/kR0dhhc #linux
    RT @google: Hey, sysadmins - thanks for all you do. May your pagers be silent and your users be clueful today! http://t.co/N2XzFgw
    RT @google: Hey, sysadmins - thanks for all you do. May your pagers be silent and your users be clueful today! http://t.co/y9TbCqb #sysadminday
    RT @mfujiwara: http://www.sysadminday.com/
    RT @mitchjoel: It's SysAdmin Day! Have you hugged your SysAdmin today? Make sure all employees follow the rules: http://bit.ly/17m98z #humor
    ? @mfujiwara: http://www.sysadminday.com/

    As before, we have to tell jgrep to grep the results array contained inside the returned document.

    I can also find all the restaurants near my village via SimpleGEO:

    curl -x localhost:8001 -s "http://api.simplegeo.com/1.0/places/51.476959,0.006759.json?category=Restaurant"|
       jgrep --start features "properties.distance<2.0" -s "properties.address \
                                          properties.name \
                                          properties.postcode \
                                          properties.phone \
                                          properties.distance"
    [
      {
        "properties.address": "15 Stratheden Road",
        "properties.distance": 0.773576114771768,
        "properties.phone": "+44 20 8858 8008",
        "properties.name": "The Lamplight",
        "properties.postcode": "SE3 7TH"
      },
      {
        "properties.address": "9 Stratheden Parade",
        "properties.distance": 0.870622234751732,
        "properties.phone": "+44 20 8858 0728",
        "properties.name": "Sun Ya",
        "properties.postcode": "SE3 7SX"
      }
    ]

    There's a lot more I didn't show; it supports all the usual operators like <=, and a fair few other bits.

    You can get this utility by installing the jgrep Ruby gem, or grab the code from GitHub. The gem is a library, so you can use these abilities in your own Ruby programs, but it also includes the CLI tool shown here.
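
    Installation is the usual RubyGems one-liner:

    $ gem install jgrep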

    It’s pretty new code and we’d totally love feedback, bugs and ideas! Follow the author on Twitter at @pieterloubser and send him some appreciation too.

    Jenkins Build Pipeline Example
    Gareth Rushgrove | 23 Jul 2011 | http://morethanseven.net/2011/07/24/Jenkins-build-pipeline-example.html

    The idea of a build pipeline for web application deployment appears to have picked up lots of interest from the excellent Continuous Delivery book. Inspired by that, some nice folks have built an excellent plugin for Jenkins, unsurprisingly called the Build Pipeline Plugin. Here's a quick example of how I'm using it for one of my projects*.

    Build pipeline example in Jenkins

    The pipeline is really just a visualisation of upstream and downstream builds in Jenkins given a starting point, plus the ability to set up manual steps rather than just the default build-after ones. That means the steps are completely up to you and your project. In this case I'm using:

    1. Build – downloads the latest code and any dependencies. You could also create a system package here if you like. If successful, triggers…
    2. Staging deploy – in this case I'm using Capistrano, but it could easily have been rsync, Fabric, or triggering a Chef or Puppet run. If successful, triggers…
    3. Staging test – a simple automated test suite that checks that the site on staging is correct. The tests are bundled with the code, so are pulled down as part of the build step. If the tests pass…
    4. Staging approval – this is one of the clever parts of the plugin. This Jenkins job actually does nothing except log its successful activation. It's only run when I press the Trigger button on the pipeline view, which acts as a nice manual gate for a once-over check on staging.
    5. Production deploy – using the same artifact as deployed to staging, this job triggers the deploy to the production site, again via Capistrano.

    I'm triggering builds on each commit too, via a webhook, but I can also kick off a build by clicking the button on the pipeline view if I need to.
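
    For reference, such a remote trigger can be as simple as hitting the job's build URL, assuming the job has "Trigger builds remotely" enabled; the host, job name and token here are made up:

    $ curl -X POST "http://jenkins.example.com/job/myproject-build/build?token=SECRET"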

    Pipeline example showing build in progress

    Note that I'm only allowing the last build to be deployed, given that only that one can be checked on staging. Again, this is configuration specific to my usage; the plugin lets you operate in a number of different ways. There are a number of tweaks I want to make to this, mainly around experimenting with parameterized builds to pass useful information downstream and even allow parallel execution. For the moment I have the "Block build when upstream project is building" flag checked on the deploy.

     * Yes, this is a one page site. With a 5 step build process in Jenkins including a small suite of functional tests and a staging environment. This is what we call overengineering.
    The whole shebang
    Kenneth Kufluk | 23 Jul 2011 | http://kenneth.kufluk.com/blog/?p=949

    My colleagues have been amicably bickering, online and offline, about the use of hashbangs.

    Ben Cherry explains that hashbangs are necessary, though not pretty, temporary workarounds for the lack of pushState support, which is part of the HTML5 spec.
    http://www.adequatelygood.com/2011/2/Thoughts-on-the-Hashbang

    Dan Webb explains that this temporary workaround goes against the fundamental rule that “cool URIs don’t change”, and that they are bad for the web in general.
    http://danwebb.net/2011/5/28/it-is-about-the-hashbangs

    When we discuss this sort of thing, we’re talking about the web in general, not about any Twitter pages, URLs, sites or strategy. Our personal sites are for personal opinions.

    Dan and Ben have strong points, but I think the Internet is sufficiently large as to accommodate a variety of architectures. While building one way may simply be better, for excellent reasons, than another way, one is often forced to choose the lesser route due to external constraints.

    New Twitter is a good example: a clientside application built using JavaScript. Besides the obvious UI upgrade, there was also a desire to reduce the load on our servers, and leverage our own API. This enabled us to concentrate resources on scaling and bugfixing the API in the backend (good for everyone), while letting frontend developers iterate quickly against an agreed specification. It was a win-win architecture.

    The result is an application which needs to allow navigation without server calls. Thus the hashbang in the URL. It’s a necessary consequence of the requirements of the project.

    Dan does raise some excellent points that the hashbangs cause problems. Returning to a hypothetical example, we might still have reasons not to rearchitect back to the classical page-request method. So we should address the problems Dan raises directly.

    1 Spiders should execute JavaScript

    There’s essentially no reason why a spider shouldn’t be executing some form of JavaScript. It’s mature enough. We’re using JavaScript to write out our content, and so the spider should be able to execute our JavaScript before it evaluates our DOM for content. It merely needs to know when the page is ready for this to happen.

    2 Ajax pages need a ‘Page Loaded’ event

    Spiders and search indexers can and do sometimes implement JavaScript runtimes. However, even in this case there’s no well recognised way to say ‘this is a redirect’ or ‘this content is not found’ in a way that non-humans will understand.

    We just need this specification. Let us have a way to signal HTTP response code equivalents like 404, 500 and “Page complete” through manipulation of the DOM with JavaScript.

    3 Hashbangs are a fallback

    The original hashbang proposal from Google aliases hashbanged urls to _escaped_fragment_ querystring parameters, allowing it to read the content of hashbanged urls as a server-side rendered page. I don’t believe this is a good system.

    For a start, our linked URLs should be ‘cool’. It is only JavaScript that should add the hashbang, in browsers that need it. For example:

    <a href="/kpk">Kenneth</a>
    <script>if ($.browser.msie) $('a').click(function(){location.hash='!'+this.pathname;return false;});</script>

    When Google reads this link, it should request ‘/kpk’. As our page is a clientside app, the server will respond with a redirect to ‘/’, then a pushState back to ‘/kpk’, before rendering the content to the DOM. If we can then trigger a ‘page loaded’ event, Google can start reading our content.

    4 Hashbangs should be invisible

    A hashbang is a temporary workaround for the lack of pushState. While our links should only be hashbanged for the non-pushState browsers, Google may still find a hashbang in a URL. As such, Google should regard it as invisible. The URL it saves should be the URL without a hashbang.

    For example,
    Spider finds link to http://twitter.com/#!kpk
    Spider saves link as http://twitter.com/kpk
    Spider finds link to http://twitter.com/#!kpk/mentions?page=10
    Spider saves link as http://twitter.com/kpk/mentions?page=10

    This lets us switch to Cool URLs with pushState when and where it becomes available. It lets us use canonical urls. And should we choose to revert to a client/server model later on, we are able to do so.

    5 Hashbangs should be eternally supported

    Once you hashbang, you can’t go back.

    Once a URL is made visible, you should be committed to maintaining that link forever. Fortunately, hashbanged URLs are easy to replace, if they’ve been designed invisibly. If you don’t use hashbangs anymore, just render them invisible with this script on your homepage:

    <script>if (location.hash.charAt(1)=='!') location.replace(location.hash.substr(2));</script>


    So, in summary, I suggest we (or rather, Google, as the authority over content reading) should accept the clientside application on its own terms, rather than as an alias to a server-side application. For while the client-server model has certain inherent advantages, there will still be occasions when a clientside application is more appropriate.

    We can support that by changing the rules on interpreting hashbangs, and by introducing a DOM element controlled by JavaScript that describes the page status.

    Devop
    The Build Doctor | 20 Jul 2011 | http://www.build-doctor.com/2011/07/20/devop/


    Tools and titles are not a substitute for understanding and collaboration, sadly.


    Devop is a post from: The Build Doctor. Sponsored by AnthillPro, the build and deployment automation server that lets you release with confidence.


    "the nerdiest thing I've seen all day"
    Kenneth Kufluk | 19 Jul 2011 | http://kenneth.kufluk.com/blog/?p=945

    In between the hours spent coding the hardcore JavaScript behind New Twitter, I like to put together fun little JavaScript demos. They get shown, if they're good enough, on the screens around the office.

    Today’s is a tribute to the almighty Matrix. Although they sadly never made any sequels, and the effects have been copied so often they now seem cliché, it was an outstanding film of its time.

    So here it is: a Twitter List timeline, shown as a Matrixy matrix:
    http://kenneth.kufluk.com/matrix/

    Initially, I wanted to have the tweets scrolling down the page using CSS animations. Sadly, my machine got completely overwhelmed with only a few streams. I would’ve turned to my friend the canvas, but our display screens are something like 1900×800 and slow, which makes full-screen canvas too jerky. So I just rendered a large set of absolutely positioned divs, one for each character.

    Reading the text is a bit tricky. I suppose that in an ideal world the tweets would stream upwards, but I didn't want to deviate so far from the original. And so the tweets stream down the screen, and are mostly unreadable. I have tried reversing the text, but that simply makes it harder to read. English is not a language designed for reading bottom to top.

    The image in the background is taken from a png. I draw it to a hidden canvas, but draw it scaled to the same dimensions (pixels) as the grid (characters). Then, I work out the visibility (brightness+opacity) of each (now-pixellated) pixel. This is added to the “bg_opacity” property of the grid cell, which is combined with the opacity of the current character to give the effect.

    It’s quite easy to swap out the list and the image, and to tune the character size, just by editing variables at the top of the script.

    The code is, naturally, on Github:
    https://github.com/kennethkufluk/Twitter-Matrix

    Have a play, have a tinker. Let me know how I could improve the effect, and improve the performance.

    Update:

    I jiggled the code around a bit.  Instead of absolutely positioning each character, and setting their content and opacity each time, I now insert a row of characters at a time.  The row has spans for setting each char’s opacity.  The bird image is now a translucent canvas overlay.

    Performance is significantly better, and you can use Inspector without crashing the browser, but the layering doesn’t seem to work reliably in Chrome.  I’m not sure why.  So sometimes you just don’t see the bird.

    And when you see super-wide Unicode characters, it can upset the grid a bit.  I haven’t worried about that.

    Update 2

    I took away “-webkit-transform-style: preserve-3d;” from the body tag, and the z-indexed layers now work properly.
