{"version":"1.0","provider_name":"Harvard Gazette","provider_url":"https:\/\/dev.news.harvard.edu\/gazette","author_name":"harvardgazette","author_url":"https:\/\/dev.news.harvard.edu\/gazette\/story\/author\/harvardgazette\/","title":"Perfecting digital imaging &#8212; Harvard Gazette","type":"rich","width":600,"height":338,"html":"<blockquote class=\"wp-embedded-content\" data-secret=\"KhsWLLbV8i\"><a href=\"https:\/\/dev.news.harvard.edu\/gazette\/story\/2013\/07\/perfecting-digital-imaging\/\">Perfecting digital imaging<\/a><\/blockquote><iframe sandbox=\"allow-scripts\" security=\"restricted\" src=\"https:\/\/dev.news.harvard.edu\/gazette\/story\/2013\/07\/perfecting-digital-imaging\/embed\/#?secret=KhsWLLbV8i\" width=\"600\" height=\"338\" title=\"&#8220;Perfecting digital imaging&#8221; &#8212; Harvard Gazette\" data-secret=\"KhsWLLbV8i\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" class=\"wp-embedded-content\"><\/iframe><script>\n\/*! This file is auto-generated *\/\n!function(d,l){\"use strict\";l.querySelector&&d.addEventListener&&\"undefined\"!=typeof URL&&(d.wp=d.wp||{},d.wp.receiveEmbedMessage||(d.wp.receiveEmbedMessage=function(e){var t=e.data;if((t||t.secret||t.message||t.value)&&!\/[^a-zA-Z0-9]\/.test(t.secret)){for(var s,r,n,a=l.querySelectorAll('iframe[data-secret=\"'+t.secret+'\"]'),o=l.querySelectorAll('blockquote[data-secret=\"'+t.secret+'\"]'),c=new RegExp(\"^https?:$\",\"i\"),i=0;i<o.length;i++)o[i].style.display=\"none\";for(i=0;i<a.length;i++)s=a[i],e.source===s.contentWindow&&(s.removeAttribute(\"style\"),\"height\"===t.message?(1e3<(r=parseInt(t.value,10))?r=1e3:~~r<200&&(r=200),s.height=r):\"link\"===t.message&&(r=new URL(s.getAttribute(\"src\")),n=new URL(t.value),c.test(n.protocol))&&n.host===r.host&&l.activeElement===s&&(d.top.location.href=t.value))}},d.addEventListener(\"message\",d.wp.receiveEmbedMessage,!1),l.addEventListener(\"DOMContentLoaded\",function(){for(var e,t,s=l.querySelectorAll(\"iframe.wp-embedded-content\"),r=0;r<s.length;r++)(t=(e=s[r]).getAttribute(\"data-secret\"))||(t=Math.random().toString(36).substring(2,12),e.src+=\"#?secret=\"+t,e.setAttribute(\"data-secret\",t)),e.contentWindow.postMessage({message:\"ready\",secret:t},\"*\")},!1)))}(window,document);\n\/\/# sourceURL=https:\/\/dev.news.harvard.edu\/wp-includes\/js\/wp-embed.min.js\n<\/script>\n","thumbnail_url":"https:\/\/dev.news.harvard.edu\/wp-content\/uploads\/2013\/07\/seas_rendered-translucent-materials5_605.jpg","thumbnail_width":605,"thumbnail_height":403,"description":"Despite advances, the best software and video cameras cannot seem to get computer-generated images and digital film to look exactly the way our eyes expect them to. Harvard's Hanspeter Pfister and Todd Zickler are working to narrow the gap between \u201cvirtual\u201d and \u201creal\u201d by asking the question: How do we see what we see?"}