Posted in Show & Tell
Yesterday I worked on a script/style to pixelate photos. I think I've come up with something that works reasonably well. Here are a few examples.
One of the biggest challenges in pixelating an image is deciding how to adjust each pixel's color from the original photo. I came up with a palette of 58 colors that seems to work reasonably well. The resulting photos preserve a lot of the original colors, but they tend to be brighter than non-colorized pixelations.
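For anyone curious, here is a rough sketch of the general idea in Python with Pillow. This isn't the exact script, and the palette below is a small stand-in rather than the full 58-color palette.

```python
from PIL import Image

# Placeholder palette: a handful of RGB tuples standing in for the real 58-color palette.
PALETTE = [(20, 20, 30), (200, 60, 60), (240, 200, 80), (70, 140, 90), (235, 235, 225)]

def nearest_palette_color(rgb):
    """Return the palette color with the smallest squared RGB distance."""
    return min(PALETTE, key=lambda p: sum((a - b) ** 2 for a, b in zip(rgb, p)))

def pixelate(path, block=16):
    """Downscale so each pixel represents one block, snap colors to the palette,
    then upscale with nearest-neighbor to get crisp blocks."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    small = img.resize((max(1, w // block), max(1, h // block)), Image.BILINEAR)
    px = small.load()
    for y in range(small.height):
        for x in range(small.width):
            px[x, y] = nearest_palette_color(px[x, y])
    return small.resize((w, h), Image.NEAREST)

# result = pixelate("photo.jpg", block=12)
# result.save("photo_pixelated.png")
```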
Posted in Show & Tell
I've been getting into the NFT space lately. There are a lot of skeptics, but it feels like the art world is being disrupted by NFT markets and will continue to be.
I would never advocate for people to buy and burn physical art with the purpose of replacing it with a digital copy. The Banksy stunt was interesting but likely unsustainable. In the future, physical art and NFTs will have more of a symbiotic relationship. Sure, people will continue to own physical paintings. But NFTs are a far better alternative to selling prints.
NFTs will hold so much more value than a typical physical print. Having a verifiable receipt for an NFT makes prints essentially resellable. Without that, I personally would never consider buying a secondhand print for much more than the cost of production.
Making prints resellable (with royalties) will be game-changing for artists in so many ways. Artists can now build recurring revenue while generating demand for their originals.
While the art community has not fully embraced the blockchain, it really is only a matter of time. Artists who choose to adopt the technology will quickly gain a competitive edge. Eventually, other traditional artists will come along as late adopters. Over time, new artists will be more and more likely to embrace the technology from day one.
I'm also finding that NFT markets are very raw in a lot of areas. In particular, they are not doing a great job of enabling artists to tell their stories. There are a lot of reasons for this. New marketplaces such as OpenSea and Rarible are focused more on the mechanics of buying and selling art on a blockchain than on the storytelling process.
This opens the door for an app that avoids the complexities of dealing with the blockchain while focusing on storytelling. That's why I've decided to build Raster.ly. More to come. Stay tuned!
Posted in Inspiration
PXON (pronounced like "picks on", as in "I hate it that Jenn always 'picks on' me.") is a proposed standard, which no one needs or even asked for, for the representation of pixel art using JSON and properties of the Exif RDF schema. It provides the means for both lightweight data-interchange and the object-oriented creation of pixel art.
How could you not be intrigued by a proposed standard, which no one needs or even asked for?
Posted in Ask an Expert
Let's say I'm writing a screenplay in Google Docs. Using a standard screenplay format, in order to add a character name (to change who's speaking), I have to:
- Unindent any previous indentations
- Center-justify the text
- Turn on all caps
- Type the name
Then to type dialogue, I:
- Left-justify the text
- Turn off caps lock
- Indent by one
- Type the dialogue
*Then* if I want background info, I have to:
- De-indent the text
- Type the description
As you might imagine, this gets *tedious* over the course of switching between multiple content types (and those aren't even *all of them*).
How do I make this easier, without changing the format itself? Any tips?
Posted in Random
Don't call it a comeback! The client-server model has been here for years.
Let's start with a story: I have no fewer than a dozen internet-connected devices with screens in my home, for a family of three (including myself). I'm starting to think that having an independent, fully-functional "brain" in each one is a stupid idea.
Why not take the client-server model that's used on the web, and apply it to computers again, like the mainframe days? You could have a "core" of computing power in a designated (and probably air-conditioned) space in your home, with a bunch of screens that connect to it. You can even have a VPN/SSH tunnel that goes from your devices to your core when you're away...
Thoughts? cc Dane Lyons
Posted in Random
I'd *love* to build a box that runs on a RISC-V processor -- the whole open-source hardware aspect is really appealing (as are the tons of general-purpose registers available... but that's another post). Point is:
Is it possible to build a RISC-V system right now? What about ARM?
(I know, the new Macs are using ARM, but I'm thinking about a desktop setup.)
Posted in Ideas
This is my current project. I don't have a working demo yet, so I'm listing it under "Ideas" instead of "Show & Tell". Basically, it's going to be a web app that hooks up to a MIDI keyboard and generates artwork based on what you play.
I'm starting with a simple visualization, or at least one that's simple in my mind and relatively simple mathematically. I draw concentric circles whose radii vary around each circle, meaning I extend and retract sections of the circle based on what you play during the bar. At every bar of music (if it were sheet music), I expand to a new circle and zoom out a bit. I plan on animating these transitions and playing the music alongside them in the final product, but for now just the visualization is a challenge!
To put it another way: imagine you're starting to play. The cursor (the point being drawn) is at [0, 10] for a circle with an initial radius of 10. As you play, the cursor follows the circle, going outward for higher notes and inward for lower notes. Once you complete a bar, or a measure, or however long the circle lasts, you start a new circle outside the original one and the viewport zooms out a bit.
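Here's a rough sketch in Python of the geometry only (no MIDI or canvas yet) of how one bar might map to a deformed circle. The parameter names and the note-to-radius mapping below are placeholder guesses, not anything final.

```python
import math

def bar_outline(notes, base_radius=10.0, spread=4.0, samples_per_note=16, center_note=60):
    """Trace one bar as a deformed circle: sweep a full revolution over the bar,
    pushing the radius outward for notes above middle C and inward for notes below.
    All parameters here are illustrative placeholders."""
    if not notes:
        return []
    points = []
    total = len(notes) * samples_per_note
    for i, note in enumerate(notes):
        offset = (note - center_note) / 12.0 * spread  # +/- spread per octave from middle C
        for s in range(samples_per_note):
            t = (i * samples_per_note + s) / total     # fraction of the bar completed
            angle = math.pi / 2 + 2 * math.pi * t      # start at the top, i.e. [0, radius]
            r = base_radius + offset
            points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# One bar of MIDI note numbers (C4, E4, G4, C5); the next bar would start from a
# larger base_radius, and the viewport would zoom out a bit.
outline = bar_outline([60, 64, 67, 72])
```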
Later on, I might be able to vary color based on octave, or key velocity, or something like that, but for now, just trying to get paint on the canvas, quite literally.
Thoughts?
Posted in Show & Tell
Traditional gradients are fine. But I find that they often...
- Lack realism
- Feel too flat
- Lack texture
- Suffer from color banding
- Suffer from gray zone transitions
A common solution to some of these problems is to just add noise. There are various noise algorithms.
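For reference, the "just add noise" approach looks roughly like this, sketched here with Pillow; the gradient colors and noise amount are arbitrary placeholders.

```python
import random
from PIL import Image

def noisy_gradient(width=400, height=200, start=(40, 60, 120), end=(230, 120, 90), amount=12):
    """Horizontal linear gradient with a small amount of uniform per-pixel noise
    to break up banding. All parameters are arbitrary placeholders."""
    img = Image.new("RGB", (width, height))
    px = img.load()
    for x in range(width):
        t = x / (width - 1)
        base = [round(s + (e - s) * t) for s, e in zip(start, end)]
        for y in range(height):
            px[x, y] = tuple(
                max(0, min(255, c + random.randint(-amount, amount))) for c in base
            )
    return img

# noisy_gradient().save("gradient_noise.png")
```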
But a lot of the noise algorithms just don't produce the effect I'm going for. They often feel rather "cloudy". So I set out to create a new technique from scratch. I call it "Color Tunneling" because it reminds me of quantum tunneling. I'll take you through the process step by step without diving into code (there's a rough sketch after the walkthrough).
Step 1:
Start with a basic gradient. Here is a very zoomed-in example to get us started.
Step 2:
Grab a random pixel and take note of its color and position.
Step 3:
Find an adjacent row and column of pixels. I usually randomly go above or below the target pixel and to the left or right of the pixel. In this case, I'm going above and to the left.
Step 4:
Paint all the pixels in the selected row and column the same color as the target pixel. In this case, only the pixels in the vertical column were affected.
Step 5:
Repeat the process by selecting a new random pixel. Here is a new pixel with a row to the right of the target pixel and a column below the target pixel.
Step 6:
Paint the new row and column using the target pixel color.
Step 7:
Repeat the process. In this case, we're adding a vertical column above the pixel, and a horizontal row to the right.
Step 8:
Again, paint the row and column.
Steps 9-20:
Continue repeating the process. The length and width of the lines create very different effects: if the lines are too long, everything blends together into a very random effect; medium lines create a fabric-like effect; short lines create a noise-like effect.
Steps 21-30:
More of the same. You'll probably notice the selected pixels aren't very random. In this tutorial, I'm grabbing pixels in a very specific section of the gradient to illustrate the technique. Notice how the horizontal banding is quickly broken up when the new lines are introduced.
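To summarize the steps in code, here is a rough sketch of the idea using Python and Pillow. It isn't the exact implementation I'm using (the line length, iteration count, and extension directions are simplified), but it captures the row-and-column painting described above.

```python
import random
from PIL import Image

def color_tunnel(img, iterations=100_000, line_length=12):
    """Rough sketch of the steps above. line_length and iterations are illustrative
    guesses; tune them for fabric-like vs. noise-like effects."""
    px = img.load()
    w, h = img.size
    for _ in range(iterations):
        # Step 2: grab a random pixel and note its color and position.
        x, y = random.randrange(w), random.randrange(h)
        color = px[x, y]
        # Step 3: pick an adjacent row (above or below) and column (left or right),
        # plus a random direction to extend each line in.
        row_y = y + random.choice((-1, 1))
        col_x = x + random.choice((-1, 1))
        row_dir = random.choice((-1, 1))
        col_dir = random.choice((-1, 1))
        # Step 4: paint the pixels in that row and column with the target color.
        if 0 <= row_y < h:
            for i in range(line_length):
                xx = x + row_dir * i
                if 0 <= xx < w:
                    px[xx, row_y] = color
        if 0 <= col_x < w:
            for i in range(line_length):
                yy = y + col_dir * i
                if 0 <= yy < h:
                    px[col_x, yy] = color
    return img

# gradient = Image.open("gradient.png").convert("RGB")
# color_tunnel(gradient, iterations=1_000_000).save("tunneled.png")
```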
Just continue the process potentially millions of times depending on the size of the image and the desired effect. I'm using the effect to create a series of NFT images using random gradients. Here are a few examples:
Here is a gradient that uses the same colors as in this tutorial but with 1 million vertical and horizontal lines added.
Here is another textured gradient that uses a similar technique but with a few extra steps to add more color.
Sometimes the technique tends to be a little harsh visually. It is possible to go lighter and airier, though. If you go too subtle with the colors, everything sort of blends together and it isn't easy to see the technique in action. Here is an option that is a little softer but still has visible lines.
Here is a slightly more subtle version. So far I haven't tried to go too subtle with the technique. It is possible to use this to break up banding and create really smooth-looking gradients. If the goal is to go buttery smooth, then a more traditional noise algorithm might be better.
I hope you find this technique a useful alternative for adding texture to gradients. It can be harsh, but there are ways to be more subtle. You can always use blending options and play with the parameters until you get the desired effect.
Beyond the rough sketch above, I didn't dive into the code here because I really just wanted to go over the technique at a high level. I plan to open-source some of the code in the future if anyone is interested.