Tag: Browser

IE Down, Edge Up… Global Browser Usage Stats Are for Cocktail Parties and Conference Slides

I enjoy articles like Hartley Charlton’s “Microsoft Edge Looks Set to Overtake Safari as World’s Second Most Popular Desktop Browser.” It’s juicy! We know these massive players in the browser market care very much about their market share, so when one passes another it’s news. It’s like an Olympic speed skater favored for gold getting a bronze instead.

Microsoft Edge is now used on 9.54 percent of desktops worldwide, a mere 0.3 percent behind Apple’s Safari, which stands at 9.84 percent. Google Chrome continues to hold first place with an overwhelming 65.38 percent of the market. Mozilla Firefox takes fourth place with 9.18 percent.

In January 2021, Safari held a 10.38 percent market share and appears to be gradually losing users to rival browsers over time. If the trend continues, Apple is likely to slip to third or fourth place in the near future.

Scoping the data down even to the continent level tells a different story. In Europe, for example, Edge has already passed Safari, while in North America the gap is still about 5%.

Source: MacRumors.com

What does it matter to you or me? Nothing, I hope. These global stats should mean very little to us, outside a little casual nerdy cocktail party chatter. Please don’t make decisions about what to support and not support based on global statistics. Put some kind of basic analytics in place on your site, get data from actual visits, and make choices on that data. That’s the only data that matters.

Alan Dávalos’ “The baseline for web development in 2022” paints a picture of what we should be supporting based again on global browser usage statistics.

Globally, IE’s current market share is under 0.5%. And even in Japan, which has a higher market share of IE compared to other countries, IE’s market share is close to 2% and has a downward tendency.

Until now we kept supporting IE due to its market share. But now, there are basically no good reasons to keep supporting IE.

Again it seems so bizarre to me that any of us would make a choice on what to support based on a global usage statistic. Even when huge players make choices, they do it based on their own data. When Google “dropped” IE 11 (they still serve a perfectly fine baseline experience), they “did the math.” WordPress, famously powering somewhere in the “a third of the whole internet” range, factored in usage of their own product.

Even if you’re building a brand new product and trying to make these choices, you’ll have analytic data soon enough, and can make future-facing support choices based on that as it rolls in.


IE Down, Edge Up… Global Browser Usage Stats Are for Cocktail Parties and Conference Slides originally published on CSS-Tricks. You should get the newsletter.

CSS-Tricks


Comparing Node JavaScript to JavaScript in the Browser

Being able to understand Node continues to be an important skill if you’re a front-end developer. Deno has arrived as another way to run JavaScript outside the browser, but the huge ecosystem of tools and software built with Node means it’s not going anywhere anytime soon.

If you’ve mainly written JavaScript that runs in the browser and you’re looking to get more of an understanding of the server side, many articles will tell you that Node JavaScript is a great way to write server-side code and capitalize on your JavaScript experience.

I agree, but there are a lot of challenges jumping into Node.js, even if you’re experienced at authoring client-side JavaScript. This article assumes you’ve got Node installed, and you’ve used it to build front-end apps, but want to write your own APIs and tools using Node.

For a beginner’s explanation of Node and npm, check out Jamie Corkhill’s “Getting Started With Node” on Smashing Magazine.

Asynchronous JavaScript

We don’t need to write a whole lot of asynchronous code in the browser. The most common use of asynchronous code in the browser is fetching data from an API using fetch (or XMLHttpRequest if you’re old-school). Other uses of async code might include using setInterval, setTimeout, or responding to user input events, but we can get pretty far writing JavaScript UIs without being asynchronous JavaScript geniuses.

If you’re using Node, you will nearly always be writing asynchronous code. From the beginning, Node has been built to leverage a single-threaded event loop using asynchronous callbacks. The Node team blogged in 2011 about how “Node.js promotes an asynchronous coding style from the ground up.” In Ryan Dahl’s talk announcing Node.js in 2009, he talks about the performance benefits of doubling down on asynchronous JavaScript.

The asynchronous-first style is part of the reason Node gained popularity over other attempts at server-side JavaScript implementations such as Netscape’s application servers or Narwhal. However, being forced to write asynchronous JavaScript might cause friction if you aren’t ready for it.

Setting up an example

Let’s say we’re writing a quiz app. We’re going to allow users to build quizzes out of multiple-choice questions to test their friends’ knowledge. You can find a more complete version of what we’ll build at this GitHub repo. You could also clone the entire front-end and back-end to see how it all fits together, or you can take a look at this CodeSandbox (run npm run start to fire it up) and get an idea of what we’re making from there.

Screenshot of the quiz editor we’re building, which contains four inputs, two checkboxes, and four buttons.

The quizzes in our app will consist of a bunch of questions, and each of these questions will have a number of answers to choose from, with only one answer being correct.

We can hold this data in an SQLite database. Our database will contain:

  • A table for quizzes with two columns:
    • an integer ID
    • a text title
  • A table for questions with three columns:
    • an integer ID
    • body text
    • An integer reference matching the ID of the quiz each question belongs to
  • A table for answers with four columns:
    • an integer ID
    • body text
    • whether the answer is correct or not
    • an integer reference matching the ID of the question each answer belongs to

SQLite doesn’t have a boolean data type, so we can store whether an answer is correct in an integer, where 0 is false and 1 is true.
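Since the column is just an integer, code that reads an answer back converts it on the JavaScript side. A tiny sketch (the hard-coded row object is illustrative, shaped like a row from our answer table):

```javascript
// SQLite hands us 0 or 1 for the iscorrect column; converting it to a
// real JavaScript boolean is a one-line comparison
const row = { body: "Paris", iscorrect: 1 }; // stand-in for a fetched row
const isCorrect = row.iscorrect === 1;
console.log(isCorrect); // true
```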

First, we’ll need to initialize npm and install the sqlite3 npm package from the command line:

npm init -y
npm install sqlite3

This will create a package.json file. Let’s edit it and add:

"type":"module"

To the top-level JSON object. This will allow us to use modern ES6 module syntax. Now we can create a Node script to set up our tables. Let’s call our script migrate.js.

// migrate.js
import sqlite3 from "sqlite3";

let db = new sqlite3.Database("quiz.db");

db.serialize(function () {
  // Setting up our tables:
  db.run("CREATE TABLE quiz (quizid INTEGER PRIMARY KEY, title TEXT)");
  db.run("CREATE TABLE question (questionid INTEGER PRIMARY KEY, body TEXT, questionquiz INTEGER, FOREIGN KEY(questionquiz) REFERENCES quiz(quizid))");
  db.run("CREATE TABLE answer (answerid INTEGER PRIMARY KEY, body TEXT, iscorrect INTEGER, answerquestion INTEGER, FOREIGN KEY(answerquestion) REFERENCES question(questionid))");
  // Create a quiz with an id of 0 and a title "my quiz"
  db.run("INSERT INTO quiz VALUES(0, 'my quiz')");
  // Create a question with an id of 0, a question body
  // and a link to the quiz using the id 0
  db.run("INSERT INTO question VALUES(0, 'What is the capital of France?', 0)");
  // Create four answers with unique ids, answer bodies, an integer for whether
  // they're correct or not, and a link to the first question using the id 0
  db.run("INSERT INTO answer VALUES(0, 'Madrid', 0, 0)");
  db.run("INSERT INTO answer VALUES(1, 'Paris', 1, 0)");
  db.run("INSERT INTO answer VALUES(2, 'London', 0, 0)");
  db.run("INSERT INTO answer VALUES(3, 'Amsterdam', 0, 0)");
});
db.close();

I’m not going to explain this code in detail, but it creates the tables we need to hold our data. It will also create a quiz, a question, and four answers, and store all of this in a file called quiz.db. After saving this file, we can run our script from the command line using this command:

node migrate.js

If you like, you can open the database file using a tool like DB Browser for SQLite to double check that the data has been created.

Changing the way you write JavaScript

Let’s write some code to query the data we’ve created.

Create a new file and call it index.js. To access our database, we can import sqlite3, create a new sqlite3.Database, and pass the database file path as an argument. On this database object, we can call the get function, passing in an SQL string to select our quiz and a callback that will log the result:

// index.js
import sqlite3 from "sqlite3";

let db = new sqlite3.Database("quiz.db");

db.get(`SELECT * FROM quiz WHERE quizid = 0`, (err, row) => {
  if (err) {
    console.error(err.message);
  }
  console.log(row);
  db.close();
});

Running this should print { quizid: 0, title: 'my quiz' } in the console.

How not to use callbacks

Now let’s wrap this code in a function where we can pass the ID in as an argument; we want to access any quiz by its ID. This function will return the database row object we get from db.

Here’s where we start running into trouble. We can’t simply return the object inside of the callback we pass to db and walk away. This won’t change what our outer function returns. Instead, you might think we can create a variable (let’s call it result) in the outer function and reassign this variable in the callback. Here is how we might attempt this:

// index.js
// Be warned! This code contains BUGS
import sqlite3 from "sqlite3";

function getQuiz(id) {
  let db = new sqlite3.Database("quiz.db");
  let result;
  db.get(`SELECT * FROM quiz WHERE quizid = ?`, [id], (err, row) => {
    if (err) {
      return console.error(err.message);
    }
    db.close();
    result = row;
  });
  return result;
}

console.log(getQuiz(0));

If you run this code, the console log will print out undefined! What happened?

We’ve run into a disconnect between how we expect JavaScript to run (top to bottom), and how asynchronous callbacks run. The getQuiz function in the above example runs like this:

  1. We declare the result variable with let result;. We haven’t assigned anything to this variable so its value is undefined.
  2. We call the db.get() function. We pass it an SQL string, the ID, and a callback. But our callback won’t run yet! Instead, the SQLite package starts a task in the background to read from the quiz.db file. Reading from the file system takes a relatively long time, so this API lets our user code move to the next line while Node.js reads from the disk in the background.
  3. Our function returns result. As our callback hasn’t run yet, result still holds a value of undefined.
  4. SQLite finishes reading from the file system and runs the callback we passed, closing the database and assigning the row to the result variable. Assigning this variable makes no difference as the function has already returned its result.
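The same disconnect can be reproduced without a database at all. In this sketch, setTimeout stands in for the background file read:

```javascript
// A minimal reproduction of the bug above: the callback runs later,
// after getValue has already returned
function getValue() {
  let result;
  setTimeout(() => {
    result = 42; // too late: the function has already returned
  }, 0);
  return result; // still undefined at this point
}

console.log(getValue()); // undefined
```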

Passing in callbacks

How do we fix this? Before 2015, the way to fix this would be to use callbacks. Instead of only passing the quiz ID to our function, we pass the quiz ID and a callback which will receive the row object as an argument.

Here’s how this looks:

// index.js
import sqlite3 from "sqlite3";

function getQuiz(id, callback) {
  let db = new sqlite3.Database("quiz.db");
  db.get(`SELECT * FROM quiz WHERE quizid = ?`, [id], (err, row) => {
    if (err) {
      console.error(err.message);
    } else {
      callback(row);
    }
    db.close();
  });
}

getQuiz(0, (quiz) => {
  console.log(quiz);
});

That does it. It’s a subtle difference, and one that forces you to change the way your user code looks, but it means now our console.log runs after the query is complete.

Callback hell

But what if we need to make multiple consecutive asynchronous calls? For instance, what if we were trying to find out which quiz an answer belonged to, and we only had the ID of the answer?

First, I’m going to refactor getQuiz to a more general get function, so we can pass in the table and column to query, as well as the ID:

Unfortunately, we are unable to use the (more secure) SQL parameters for parameterizing the table name, so we’re going to switch to using a template string instead. In production code you would need to scrub this string to prevent SQL injection.

function get(params, callback) {
  // In production these strings should be scrubbed to prevent SQL injection
  const { table, column, value } = params;
  let db = new sqlite3.Database("quiz.db");
  db.get(`SELECT * FROM ${table} WHERE ${column} = ${value}`, (err, row) => {
    callback(err, row);
    db.close();
  });
}

Another issue is that there might be an error reading from the database. Our user code will need to know whether each database query has had an error; otherwise it shouldn’t continue querying the data. We’ll use the Node.js convention of passing an error object as the first argument of our callback. Then we can check if there’s an error before moving forward.

Let’s take our answer with an id of 2 and check which quiz it belongs to. Here’s how we can do this with callbacks:

// index.js
import sqlite3 from "sqlite3";

function get(params, callback) {
  // In production these strings should be scrubbed to prevent SQL injection
  const { table, column, value } = params;
  let db = new sqlite3.Database("quiz.db");
  db.get(`SELECT * FROM ${table} WHERE ${column} = ${value}`, (err, row) => {
    callback(err, row);
    db.close();
  });
}

get({ table: "answer", column: "answerid", value: 2 }, (err, answer) => {
  if (err) {
    console.log(err);
  } else {
    get(
      { table: "question", column: "questionid", value: answer.answerquestion },
      (err, question) => {
        if (err) {
          console.log(err);
        } else {
          get(
            { table: "quiz", column: "quizid", value: question.questionquiz },
            (err, quiz) => {
              if (err) {
                console.log(err);
              } else {
                // This is the quiz our answer belongs to
                console.log(quiz);
              }
            }
          );
        }
      }
    );
  }
});

Woah, that’s a lot of nesting! Every time we get an answer back from the database, we have to add two layers of nesting — one to check for an error, and one for the next callback. As we chain more and more asynchronous calls our code gets deeper and deeper.

We could partially prevent this by using named functions instead of anonymous functions, which would keep the nesting shallower but make our code less concise. We’d also have to think of names for all of these intermediate functions. Thankfully, promises arrived in Node back in 2015 to help with chained asynchronous calls like this.

Promises

Wrapping asynchronous tasks with promises allows you to prevent a lot of the nesting in the previous example. Rather than having deeper and deeper nested callbacks, we can pass a callback to a Promise’s then function.

First, let’s change our get function so it wraps the database query with a Promise:

// index.js
import sqlite3 from "sqlite3";

function get(params) {
  // In production these strings should be scrubbed to prevent SQL injection
  const { table, column, value } = params;
  let db = new sqlite3.Database("quiz.db");

  return new Promise(function (resolve, reject) {
    db.get(`SELECT * FROM ${table} WHERE ${column} = ${value}`, (err, row) => {
      if (err) {
        return reject(err);
      }
      db.close();
      resolve(row);
    });
  });
}

Now our code to search for which quiz an answer is a part of can look like this:

get({ table: "answer", column: "answerid", value: 2 })
  .then((answer) => {
    return get({
      table: "question",
      column: "questionid",
      value: answer.answerquestion,
    });
  })
  .then((question) => {
    return get({
      table: "quiz",
      column: "quizid",
      value: question.questionquiz,
    });
  })
  .then((quiz) => {
    console.log(quiz);
  })
  .catch((error) => {
    console.log(error);
  });

That’s a much nicer way to handle our asynchronous code. And we no longer have to individually handle errors for each call, but can use the catch function to handle any errors that happen in our chain of functions.

We still need to write a lot of callbacks to get this working. Thankfully, there’s a newer API to help! When Node 7.6.0 was released, it updated its JavaScript engine to V8 5.5 which includes the ability to write ES2017 async/await functions.

Async/Await

With async/await we can write our asynchronous code almost the same way we write synchronous code. Sarah Drasner has a great post explaining async/await.

When you have a function that returns a Promise, you can use the await keyword before calling it, and it will prevent your code from moving to the next line until the Promise is resolved. As we’ve already refactored the get() function to return a promise, we only need to change our user-code:

async function printQuizFromAnswer() {
  const answer = await get({ table: "answer", column: "answerid", value: 2 });
  const question = await get({
    table: "question",
    column: "questionid",
    value: answer.answerquestion,
  });
  const quiz = await get({
    table: "quiz",
    column: "quizid",
    value: question.questionquiz,
  });
  console.log(quiz);
}

printQuizFromAnswer();

This looks much more familiar to code that we’re used to reading. Just this year, Node released top-level await. This means we can make this example even more concise by removing the printQuizFromAnswer() function wrapping our get() function calls.
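Top-level await only works in ES modules, which we already enabled with "type": "module". Here is a minimal, self-contained sketch of the idea, using Node’s promisified timer from timers/promises as a stand-in for our database-backed get() calls:

```javascript
// ES module context ("type": "module" in package.json), Node 14.8+
import { setTimeout as sleep } from "timers/promises";

// At module scope we can await directly: no wrapper function needed.
// sleep(delay, value) resolves with value after delay milliseconds.
const result = await sleep(10, "done");
console.log(result); // prints "done"
```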

Now we have concise code that will sequentially perform each of these asynchronous tasks. We would also be able to simultaneously fire off other asynchronous functions (like reading from files, or responding to HTTP requests) while we’re waiting for this code to run. This is the benefit of the asynchronous style.
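When tasks don’t depend on each other, Promise.all lets us start them all at once and wait only as long as the slowest one. A sketch with stand-in promises (the delay helper and the data are illustrative, not part of our quiz app):

```javascript
// A promise that resolves with a value after ms milliseconds
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function loadDashboard() {
  // Both "requests" start immediately; we only wait for the slower one
  const [user, stats] = await Promise.all([
    delay(50, { name: "Lars" }),  // hypothetical user fetch
    delay(80, { quizzes: 3 }),    // hypothetical stats fetch
  ]);
  return { user, stats };
}

loadDashboard().then((data) => console.log(data));
```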

Because so many tasks in Node are asynchronous (reading from the network, accessing a database or the filesystem), it’s especially important to understand these concepts. They also come with a bit of a learning curve.

Using SQL to its full potential

There’s an even better way! Instead of having to worry about these asynchronous calls to get each piece of data, we could use SQL to grab all the data we need in one big query. We can do this with an SQL JOIN query:

// index.js
import sqlite3 from "sqlite3";

function quizFromAnswer(answerid, callback) {
  let db = new sqlite3.Database("quiz.db");
  db.get(
    `SELECT *, a.body AS answerbody, ques.body AS questionbody FROM answer a
     INNER JOIN question ques ON a.answerquestion = ques.questionid
     INNER JOIN quiz quiz ON ques.questionquiz = quiz.quizid
     WHERE a.answerid = ?;`,
    [answerid],
    (err, row) => {
      if (err) {
        console.log(err);
      }
      callback(err, row);
      db.close();
    }
  );
}

quizFromAnswer(2, (e, r) => {
  console.log(r);
});

This will return us all the data we need about our answer, question, and quiz in one big object. We’ve also renamed each body column for answers and questions to answerbody and questionbody to differentiate them. As you can see, dropping more logic into the database layer can simplify your JavaScript (as well as possibly improve performance).

If you’re using a relational database like SQLite, then you have a whole other language to learn, with a whole lot of different features that could save time and effort and increase performance. This adds more to the pile of things to learn for writing Node.

Node APIs and conventions

There are a lot of new Node APIs to learn when switching from browser code to Node.js.

Any database connections and/or reads of the filesystem use APIs that we don’t have in the browser (yet). We also have new APIs to set up HTTP servers. We can make checks on the operating system using the OS module, and we can encrypt data with the Crypto module. Also, to make an HTTP request from Node (something we do in the browser all the time), we don’t have a fetch or XMLHttpRequest function. Instead, we need to import the https module. However, a recent pull request in the Node.js repository shows that fetch in Node appears to be on the way! There are still many mismatches between browser and Node APIs. This is one of the problems that Deno has set out to solve.
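As a sketch of both points, the built-in http module can set up a server and make a request to it. Notice how the client API is callback- and stream-based, unlike fetch; the promise wrapper here is our own, not part of the module:

```javascript
import http from "http";

// A tiny local server so the request has something to talk to
const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("hello from node");
});

// Wrap the callback/stream-based client API in a promise
function fetchText(url) {
  return new Promise((resolve, reject) => {
    http
      .get(url, (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => resolve(body));
      })
      .on("error", reject);
  });
}

// Port 0 asks the OS for any free port
server.listen(0, async () => {
  const { port } = server.address();
  const body = await fetchText(`http://localhost:${port}`);
  console.log(body); // prints "hello from node"
  server.close();
});
```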

We also need to know about Node conventions, including the package.json file. Most front-end developers will be pretty familiar with this if they’ve used build tools. If you’re looking to publish a library, the part you might not be used to is the main property in the package.json file. This property contains a path that will point to the entry-point of the library.
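For example, a hypothetical library’s package.json might point main at its entry file (the name and path here are made up for illustration):

```json
{
  "name": "my-quiz-helpers",
  "version": "1.0.0",
  "main": "./lib/index.js"
}
```

With this in place, require("my-quiz-helpers") would resolve to ./lib/index.js.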

There are also conventions like error-first callbacks: where a Node API will take a callback which takes an error as the first argument and the result as the second argument. You could see this earlier in our database code and below using the readFile function.

import fs from 'fs';

fs.readFile('myfile.txt', 'utf8', (err, data) => {
  if (err) {
    console.error(err)
    return
  }
  console.log(data)
})

Different types of modules

Earlier on, I casually instructed you to throw "type":"module" in your package.json file to get the code samples working. When Node was created in 2009, the creators needed a module system, but none existed in the JavaScript specification. They came up with CommonJS modules to solve this problem. In 2015, a module spec was introduced to JavaScript, leaving Node.js with a module system that was different from native JavaScript modules. After a herculean effort from the Node team, we are now able to use these native JavaScript modules in Node.

Unfortunately, this means a lot of blog posts and resources will be written using the older module system. It also means that many npm packages won’t use native JavaScript modules, and sometimes there will be libraries that use native JavaScript modules in incompatible ways!
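To make the difference concrete, here is the same tiny export written both ways (file names are illustrative; the CommonJS half is shown in comments so the sketch runs in either module system):

```javascript
// CommonJS (e.g. math.cjs) — the module system Node invented:
//   const double = (n) => n * 2;
//   module.exports = { double };
//   // consumer: const { double } = require("./math.cjs");

// Native ES module (math.mjs, or .js with "type": "module" set):
//   export const double = (n) => n * 2;
//   // consumer: import { double } from "./math.mjs";

const double = (n) => n * 2;
console.log(double(21)); // prints 42
```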

Other concerns

There are a few other concerns we need to think about when writing Node. If you’re running a Node server and there is a fatal exception, the server will terminate and will stop responding to any requests. This means if you make a mistake that’s bad enough on a Node server, your app is broken for everyone. This is different from client-side JavaScript where an edge-case that causes a fatal bug is experienced by one user at a time, and that user has the option of refreshing the page.
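A sketch of why this matters: without the try/catch below, the first request hitting the buggy route would bring down the process, and the app, for every user at once. The handler and route are invented for illustration:

```javascript
import http from "http";

// A handler that throws for one route — a stand-in for a real bug
function handleRequest(req, res) {
  if (req.url === "/boom") throw new Error("something broke");
  res.end("ok");
}

// Catching the error keeps the process (and every other user) alive,
// turning a fatal crash into a 500 response for one request
const server = http.createServer((req, res) => {
  try {
    handleRequest(req, res);
  } catch (err) {
    console.error(err.message);
    res.writeHead(500);
    res.end("Internal Server Error");
  }
});

server.listen(0);
```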

Security is something we should already be worried about in the front end with cross-site scripting and cross-site request forgery. But a back-end server has a wider surface area for attacks with vulnerabilities including brute force attacks and SQL injection. If you’re storing and accessing people’s information with Node you’ve got a big responsibility to keep their data safe.

Conclusion

Node is a great way to use your JavaScript skills to build servers and command line tools. JavaScript is a user-friendly language we’re used to writing. And Node’s async-first nature means you can smash through concurrent tasks quickly. But there are a lot of new things to learn when getting started. Here are the resources I wish I saw before jumping in:

And if you are planning to hold data in an SQL database, read up on SQL Basics.


Comparing Node JavaScript to JavaScript in the Browser originally published on CSS-Tricks. You should get the newsletter.



Should CSS Override Default Browser Styles?

CSS overrides can change the default look of almost anything:

  • You can use CSS to override what a checkbox or radio button looks like, but if you don’t, the checkbox will look like a default checkbox on your operating system and some would say that’s best for accessibility and usability.
  • You can use CSS to override what a select menu looks like, but if you don’t, the select will look like a default select menu on your operating system and some would say that’s best for accessibility and usability.
  • You can override what anchor links look like, but some would say they should be blue with underlines because that is the default and it’s best for accessibility and usability.
  • You can override what scrollbars look like, but if you don’t, the scrollbars will look (and behave) the way default scrollbars do on your operating system, and some would say that’s best for accessibility and usability.

It just goes on and on…

In my experience, everyone has a different line. Nearly everybody styles their buttons. Nearly everybody styles their links, but some might only customize the hue of blue and leave the underline, drawing the line at more elaborate changes. It’s fairly popular to style form elements like checkboxes, radio buttons, and selects, but some people draw the line before that.

Some people draw a line saying you should never change a default cursor, some push that line back to make the cursor a pointer for custom interactive elements, and some push that line so far they are OK with custom images as cursors. Some people draw the line with scrollbars saying they should never be customized, while some people implement elaborate designs.

CSS is a language for changing the design of websites. Every ruleset you write likely changes the defaults of something. The lines are relatively fuzzy, but I’d say there is nothing in CSS that should be outright banned from use. It’s more about making sure that when you do choose to style something, it remains usable and accessible. Heck, background-color can be terribly abused, making for inaccessible and unusable areas of a site, but nobody raises pitchforks over that.


Should CSS Override Default Browser Styles? originally published on CSS-Tricks



How to Create a Browser Extension

I’ll bet you are using browser extensions right now. Some of them are extremely popular and useful, like ad blockers, password managers, and PDF viewers. These extensions (or “add-ons”) are not limited to those purposes — you can do a lot more with them! In this article, I will give you an introduction on how to create one. Ultimately, we’ll make it work in multiple browsers.

What we’re making

We’re making an extension called “Transcribers of Reddit” and it’s going to improve Reddit’s accessibility by moving specific comments to the top of the comment section and adding aria- attributes for screen readers. We will also take our extension a little further with options for adding borders and backgrounds to comments for better text contrast.

The whole idea is that you’ll get a nice introduction to how to develop a browser extension. We will start by creating the extension for Chromium-based browsers (e.g. Google Chrome, Microsoft Edge, Brave, etc.). In a future post we will port the extension to work with Firefox, as well as Safari, which recently added support for Web Extensions in both the macOS and iOS versions of the browser.

Ready? Let’s take this one step at a time.

Create a working directory

Before anything else, we need a working space for our project. All we really need is to create a folder and give it a name (which I’m calling transcribers-of-reddit). Then, create another folder inside that one named src for our source code.

Define the entry point

The entry point is a file that contains general information about the extension (i.e. extension name, description, etc.) and defines permissions or scripts to execute.

Our entry point can be a manifest.json file located in the src folder we just created. In it, let’s add the following three properties:

{
  "manifest_version": 3,
  "name": "Transcribers of Reddit",
  "version": "1.0"
}

The manifest_version is similar to version in npm or Node. It defines what APIs are available (or not). We’re going to work on the bleeding edge and use the latest version, 3 (also known as mv3).

The second property is name and it specifies our extension name. This name is what’s displayed everywhere our extension appears, like Chrome Web Store and the chrome://extensions page in the Chrome browser.

Then there’s version. It labels the extension with a version number. Keep in mind that this property (in contrast to manifest_version) is a string that can only contain numbers and dots (e.g. 1.3.5).

More manifest.json information

There’s actually a lot more we can add to help add context to our extension. For example, we can provide a description that explains what the extension does. It’s a good idea to provide these sorts of things, as it gives users a better idea of what they’re getting into when they use it.

In this case, we’re not only adding a description, but supplying icons and a web address that Chrome Web Store points to on the extension’s page.

{
  "description": "Reddit made accessible for disabled users.",
  "icons": {
    "16": "images/logo/16.png",
    "48": "images/logo/48.png",
    "128": "images/logo/128.png"
  },
  "homepage_url": "https://lars.koelker.dev/extensions/tor/"
}
  • The description is displayed on Chrome’s management page (chrome://extensions) and should be brief, less than 132 characters.
  • The icons are used in lots of places. As the docs state, it’s best to provide three versions of the same icon in different resolutions, preferably as a PNG file. Feel free to use the ones in the GitHub repository for this example.
  • The homepage_url can be used to connect your website with the extension. A button including the link will be displayed when clicking on “More details” on the management page.
Our opened extension card inside the extension management page.

Setting permissions

One major advantage extensions have is that their APIs allow you to interact directly with the browser. But we have to explicitly give the extension those permissions, which also goes inside the manifest.json file.

{
  "manifest_version": 3,
  "name": "Transcribers of Reddit",
  "version": "1.0",
  "description": "Reddit made accessible for disabled users.",
  "icons": {
    "16": "images/logo/16.png",
    "48": "images/logo/48.png",
    "128": "images/logo/128.png"
  },
  "homepage_url": "https://lars.koelker.dev/extensions/tor/",
  "permissions": [
    "storage",
    "webNavigation"
  ]
}

What did we just give this extension permission to? First, storage. We want this extension to be able to save the user’s settings, so we need to access the browser’s web storage to hold them. For example, if the user wants red borders on the comments, then we’ll save that for next time rather than making them set it again.

We also gave the extension permission to look at how the user navigated to the current screen. Reddit is a single-page application (SPA) which means it doesn’t trigger a page refresh. We need to “catch” this interaction, as Reddit will only load the comments of a post if we click on it. So, that’s why we’re tapping into webNavigation.

We’ll get to executing code on a page later as it requires a whole new entry inside manifest.json.

Note: Depending on which permissions are requested, the browser might display a warning asking the user to accept them. Only certain permissions trigger this, and Chrome has a nice outline of them.

Managing translations

Browser extensions have a built-in internationalization (i18n) API. It allows you to manage translations for multiple languages (full list). To use the API, we have to define our translations and default language right in the manifest.json file:

"default_locale": "en"

This sets English as the language. In the event that a browser is set to any other language that isn’t supported, the extension will fall back to the default locale (en in this example).

Our translations are defined inside the _locales directory. Let’s create a folder in there for each language we want to support. Each subdirectory gets its own messages.json file.

src
  └─ _locales
     └─ en
        └─ messages.json
     └─ fr
        └─ messages.json

A translation file consists of multiple parts:

  • Translation key (“id”): This key is used to reference the translation.
  • Message: The actual translation content
  • Description (optional): Describes the translation (I wouldn’t use them, they just bloat up the file and your translation key should be descriptive enough)
  • Placeholders (optional): Can be used to insert dynamic content inside a translation

Here’s an example that pulls all that together:

{
  "userGreeting": { // Translation key ("id")
    "message": "Good $daytime$, $user$!", // Translation
    "description": "User Greeting", // Optional description for translators
    "placeholders": { // Optional placeholders
      "daytime": { // As referenced inside the message
        "content": "$1",
        "example": "morning" // Example value for our content
      },
      "user": {
        "content": "$1",
        "example": "Lars"
      }
    }
  }
}

Using placeholders is a bit more challenging. First, we need to define the placeholder inside the message. A placeholder needs to be wrapped between $ characters. Afterwards, we have to add our placeholder to the “placeholder list.” This is a bit unintuitive, but Chrome wants to know what value should be inserted for our placeholders. We (obviously) want to use a dynamic value here, so we use the special content value $1, which references our inserted value.

The example property is optional. It can be used to give translators a hint what value the placeholder could be (but is not actually displayed).
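To make the substitution mechanics concrete, here is a simplified, hypothetical model of how $placeholder$ tokens get replaced — the real chrome.i18n.getMessage does more than this, but the idea is the same:

```javascript
// Simplified model of placeholder substitution (not Chrome's real code):
// swap each $name$ token for its value; leave unknown tokens untouched.
function applyPlaceholders(message, values) {
  return message.replace(/\$(\w+)\$/g, (match, name) => {
    return name in values ? values[name] : match;
  });
}

const greeting = applyPlaceholders('Good $daytime$, $user$!', {
  daytime: 'morning',
  user: 'Lars',
});
console.log(greeting); // "Good morning, Lars!"
```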

We need to define the following translations for our extension. Copy and paste them into the messages.json file. Feel free to add more languages (e.g. if you speak German, add a de folder inside _locales, and so on).

{
  "name": {
    "message": "Transcribers of Reddit"
  },
  "description": {
    "message": "Accessible image descriptions for subreddits."
  },
  "popupManageSettings": {
    "message": "Manage settings"
  },
  "optionsPageTitle": {
    "message": "Settings"
  },
  "sectionGeneral": {
    "message": "General settings"
  },
  "settingBorder": {
    "message": "Show comment border"
  },
  "settingBackground": {
    "message": "Show comment background"
  }
}

You might be wondering why we registered the permissions when there is no sign of an i18n permission, right? Chrome is a bit weird in that regard, as you don’t need to register every permission. Some (e.g. chrome.i18n) don’t require an entry inside the manifest. Other permissions require an entry but won’t be displayed to the user when installing the extension. Some other permissions are “hybrid” (e.g. chrome.runtime), meaning some of their functions can be used without declaring a permission—but other functions of the same API require one entry in the manifest. You’ll want to take a look at the documentation for a solid overview of the differences.

Using translations inside the manifest

The first thing our end user will see is either the entry inside the Chrome Web Store or the extension overview page. We need to adjust our manifest file to make sure everything is translated.

{
  // Update these entries
  "name": "__MSG_name__",
  "description": "__MSG_description__"
}

Applying this syntax uses the corresponding translation in our messages.json file (e.g. __MSG_name__ uses the name translation).

Using translations in HTML pages

Applying translations in an HTML file takes a little JavaScript.

chrome.i18n.getMessage('name');

That code returns our defined translation (which is Transcribers of Reddit). Placeholders can be done in a similar way.

chrome.i18n.getMessage('userGreeting', {
  daytime: 'morning',
  user: 'Lars'
});

It would be a pain in the butt to apply translations to all elements this way. But we can write a little script that performs the translation based on a data- attribute. So, let’s create a new js folder inside the src directory, then add a new util.js file in it.

src
  └─ js
     └─ util.js

This gets the job done:

const i18n = document.querySelectorAll("[data-intl]");
i18n.forEach(msg => {
  msg.innerHTML = chrome.i18n.getMessage(msg.dataset.intl);
});

chrome.i18n.getAcceptLanguages(languages => {
  document.documentElement.lang = languages[0];
});

Once that script is added to an HTML page, we can add the data-intl attribute to an element to set its content. The document language will also be set based on the user language.

<!-- Before JS execution -->
<html>
  <body>
    <button data-intl="popupManageSettings"></button>
  </body>
</html>

<!-- After JS execution -->
<html lang="en">
  <body>
    <button data-intl="popupManageSettings">Manage settings</button>
  </body>
</html>

Adding a pop-up and options page

Before we dive into actual programming, we need to create two pages:

  1. An options page that contains user settings
  2. A pop-up page that opens when interacting with the extension icon right next to our address bar. This page can be used for various scenarios (e.g. for displaying stats or quick settings).
The options page containing our settings.

The pop-up containing a link to the options page.

Here’s an outline of the folders and files we need in order to make the pages:

src
  ├─ css
  |    └─ paintBucket.css
  ├─ popup
  |    ├─ popup.html
  |    ├─ popup.css
  |    └─ popup.js
  └─ options
       ├─ options.html
       ├─ options.css
       └─ options.js

The .css files contain plain CSS, nothing more and nothing less. I won’t go into detail because I know most of you reading this are already fully aware of how CSS works. You can copy and paste the styles from the GitHub repository for this project.

Note that the pop-up is not a tab and that its size depends on the content in it. If you want to use a fixed popup size, you can set the width and height properties on the html element.

Creating the pop-up

Here’s an HTML skeleton that links up the CSS and JavaScript files and adds a headline and button inside the <body>.

<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title data-intl="name"></title>

    <link rel="stylesheet" href="../css/paintBucket.css">
    <link rel="stylesheet" href="popup.css">

    <!-- Our "translation" script -->
    <script src="../js/util.js" defer></script>
    <script src="popup.js" defer></script>
  </head>
  <body>
    <h1 id="title"></h1>
    <button data-intl="popupManageSettings"></button>
  </body>
</html>

The h1 contains the extension name and version; the button is used to open the options page. The headline will not be filled with a translation (because it lacks a data-intl attribute), and the button doesn’t have any click handler yet, so we need to populate our popup.js file:

const title = document.getElementById('title');
const settingsBtn = document.querySelector('button');
const manifest = chrome.runtime.getManifest();

title.textContent = `${manifest.name} (${manifest.version})`;

settingsBtn.addEventListener('click', () => {
  chrome.runtime.openOptionsPage();
});

This script first looks for the manifest file. Chrome offers the runtime API which contains the getManifest method (this specific method does not require the runtime permission). It returns our manifest.json as a JSON object. After we populate the title with the extension name and version, we can add an event listener to the settings button. If the user interacts with it, we will open the options page using chrome.runtime.openOptionsPage() (again no permission entry needed).

The pop-up page is now finished, but the extension doesn’t know it exists yet. We have to register the pop-up by appending the following property to the manifest.json file.

"action": {
  "default_popup": "popup/popup.html",
  "default_icon": {
    "16": "images/logo/16.png",
    "48": "images/logo/48.png",
    "128": "images/logo/128.png"
  }
},

Creating the options page

Creating this page follows a pretty similar process as what we just completed. First, we populate our options.html file. Here’s some markup we can use:

<!doctype html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title data-intl="name"></title>

  <link rel="stylesheet" href="../css/paintBucket.css">
  <link rel="stylesheet" href="options.css">

  <!-- Our "translation" script -->
  <script src="../js/util.js" defer></script>
  <script src="options.js" defer></script>
</head>
<body>
  <header>
    <h1>
      <!-- Icon provided by feathericons.com -->
      <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round" role="presentation">
        <circle cx="12" cy="12" r="3"></circle>
        <path d="M19.4 15a1.65 1.65 0 0 0 .33 1.82l.06.06a2 2 0 0 1 0 2.83 2 2 0 0 1-2.83 0l-.06-.06a1.65 1.65 0 0 0-1.82-.33 1.65 1.65 0 0 0-1 1.51V21a2 2 0 0 1-2 2 2 2 0 0 1-2-2v-.09A1.65 1.65 0 0 0 9 19.4a1.65 1.65 0 0 0-1.82.33l-.06.06a2 2 0 0 1-2.83 0 2 2 0 0 1 0-2.83l.06-.06a1.65 1.65 0 0 0 .33-1.82 1.65 1.65 0 0 0-1.51-1H3a2 2 0 0 1-2-2 2 2 0 0 1 2-2h.09A1.65 1.65 0 0 0 4.6 9a1.65 1.65 0 0 0-.33-1.82l-.06-.06a2 2 0 0 1 0-2.83 2 2 0 0 1 2.83 0l.06.06a1.65 1.65 0 0 0 1.82.33H9a1.65 1.65 0 0 0 1-1.51V3a2 2 0 0 1 2-2 2 2 0 0 1 2 2v.09a1.65 1.65 0 0 0 1 1.51 1.65 1.65 0 0 0 1.82-.33l.06-.06a2 2 0 0 1 2.83 0 2 2 0 0 1 0 2.83l-.06.06a1.65 1.65 0 0 0-.33 1.82V9a1.65 1.65 0 0 0 1.51 1H21a2 2 0 0 1 2 2 2 2 0 0 1-2 2h-.09a1.65 1.65 0 0 0-1.51 1z"></path>
      </svg>
      <span data-intl="optionsPageTitle"></span>
    </h1>
  </header>

  <main>
    <section id="generalOptions">
      <h2 data-intl="sectionGeneral"></h2>

      <div id="generalOptionsWrapper"></div>
    </section>
  </main>

  <footer>
    <p>Transcribers of Reddit extension by <a href="https://lars.koelker.dev" target="_blank">lars.koelker.dev</a>.</p>
    <p>Reddit is a registered trademark of Reddit, Inc. This extension is not endorsed or affiliated with Reddit, Inc. in any way.</p>
  </footer>
</body>
</html>

There are no actual options yet (just their wrappers). We need to write the script for the options page. First, we define variables to access our wrappers and default settings inside options.js. “Freezing” our default settings prevents us from accidentally modifying them later.

const defaultSettings = Object.freeze({
  border: false,
  background: false,
});
const generalSection = document.getElementById('generalOptionsWrapper');

Next, we need to load the saved settings. We can use the (previously registered) storage API for that. Specifically, we need to define if we want to store the data locally (chrome.storage.local) or sync settings through all devices the end user is logged in to (chrome.storage.sync). Let’s go with local storage for this project.

Retrieving values needs to be done with the get method. It accepts two arguments:

  1. The entries we want to load
  2. A callback containing the values

Our entries can either be a string (e.g. like settings below) or an array of entries (useful if we want to load multiple entries). The argument inside the callback function contains an object of all entries we previously defined in { settings: ... }:

chrome.storage.local.get('settings', ({ settings }) => {
  const options = settings ?? defaultSettings; // Fall back to default if settings are not defined

  if (!settings) {
    chrome.storage.local.set({
      settings: defaultSettings,
    });
  }

  // Create and display options
  const generalOptions = Object.keys(options).filter(x => !x.startsWith('advanced'));

  generalOptions.forEach(option => createOption(option, options, generalSection));
});

To render the options, we also need to create a createOption() function.

function createOption(setting, settingsObject, wrapper) {
  const settingWrapper = document.createElement("div");
  settingWrapper.classList.add("setting-item");
  settingWrapper.innerHTML = `
  <div class="label-wrapper">
    <label for="${setting}" id="${setting}Desc">
      ${chrome.i18n.getMessage(`setting${setting}`)}
    </label>
  </div>

  <input type="checkbox" ${settingsObject[setting] ? 'checked' : ''} id="${setting}" />
  <label for="${setting}"
    tabindex="0"
    role="switch"
    aria-checked="${settingsObject[setting]}"
    aria-describedby="${setting}-desc"
    class="is-switch"
  ></label>
  `;

  const toggleSwitch = settingWrapper.querySelector("label.is-switch");
  const input = settingWrapper.querySelector("input");

  input.onchange = () => {
    toggleSwitch.setAttribute('aria-checked', input.checked);
    updateSetting(setting, input.checked);
  };

  toggleSwitch.onkeydown = e => {
    if (e.key === " " || e.key === "Enter") {
      e.preventDefault();
      toggleSwitch.click();
    }
  };

  wrapper.appendChild(settingWrapper);
}

Inside the onchange event listener of our switch (which is a checkbox under the hood) we call the function updateSetting. This method writes the updated value of the checkbox into storage.

To accomplish this, we will make use of the set function. It has two arguments: the entry we want to overwrite and an (optional) callback (which we don’t use in our case). As our settings entry is not a boolean or a string but an object containing different settings, we use the spread operator (...) and only overwrite our actual key (setting) inside the settings object.

function updateSetting(key, value) {
  chrome.storage.local.get('settings', ({ settings }) => {
    chrome.storage.local.set({
      settings: {
        ...settings,
        [key]: value
      }
    })
  });
}
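The merge itself is nothing extension-specific — it’s plain object spreading. A standalone sketch (with a made-up settings object) shows what happens to the stored value:

```javascript
// Copy the old settings object, then overwrite the single key.
// The settings object and mergeSetting helper here are illustrative.
const settings = { border: false, background: true };

function mergeSetting(current, key, value) {
  return { ...current, [key]: value };
}

const updated = mergeSetting(settings, 'border', true);
console.log(updated);         // { border: true, background: true }
console.log(settings.border); // false — the original object is untouched
```

Because the spread creates a new object, the other settings survive the write instead of being clobbered.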

Once again, we need to “inform” the extension about our options page by appending the following entry to the manifest.json:

"options_ui": {
  "open_in_tab": true,
  "page": "options/options.html"
},

Depending on your use case you can also force the options dialog to open as a popup by setting open_in_tab to false.

Installing the extension for development

Now that we’ve successfully set up the manifest file and have added both the pop-up and options page to the mix, we can install our extension to check if our pages actually work. Navigate to chrome://extensions and enable “Developer mode.” Three buttons will appear. Click the one labeled “Load unpacked” and select the src folder of your extension to load it up.

The extension should now be successfully installed and our “Transcribers of Reddit” tile should be on the page.

We can already interact with our extension. Click on the puzzle piece (🧩) icon right next to the browser’s address bar and click on the newly-added “Transcribers of Reddit” extension. You should now be greeted by a small pop-up with the button to open the options page.

Lovely, right? It might look a bit different on your device, as I have dark mode enabled in these screenshots.

If you enable the “Show comment background” and “Show comment border” settings, then reload the page, the state will persist because we’re saving it in the browser’s local storage.

Adding the content script

OK, so we can already trigger the pop-up and interact with the extension settings, but the extension doesn’t do anything particularly useful yet. To give it some life, we will add a content script.

Add a file called comment.js inside the js directory and make sure to define it in the manifest.json file:

"content_scripts": [
  {
    "matches": [ "*://www.reddit.com/*" ],
    "js": [ "js/comment.js" ]
  }
],

The content_scripts entry is made up of two parts:

  • matches: This array holds URLs that tell the browser where we want our content scripts to run. Being an extension for Reddit and all, we want this to run on any page matching *://www.reddit.com/*, where the asterisk is a wildcard to match anything after the top-level domain.
  • js: This array contains the actual content scripts.
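As a rough illustration of how such a match pattern behaves, we can translate it into a regular expression ourselves (a simplified sketch — Chrome’s actual match-pattern rules are stricter about scheme, host, and path):

```javascript
// Hypothetical helper: turn a match pattern into a regex by escaping
// regex metacharacters, then treating * as "match anything".
function matchPatternToRegex(pattern) {
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
}

const redditPages = matchPatternToRegex('*://www.reddit.com/*');
console.log(redditPages.test('https://www.reddit.com/r/CasualUK/')); // true
console.log(redditPages.test('https://example.com/'));               // false
```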

Content scripts can’t interact with a page’s own (normal) JavaScript. This means that if a website’s scripts define a variable or function, we can’t access it. For example:

// script_on_website.js
const username = 'Lars';

// content_script.js
console.log(username); // Error: username is not defined

Now let’s start writing our content script. First, we add some constants to comment.js. These constants contain regular expressions and selectors that will be used later on. The CommentUtils object is used to determine whether or not a post contains a “tor comment,” or if comment wrappers exist.

const messageTypes = Object.freeze({
  COMMENT_PAGE: 'comment_page',
  SUBREDDIT_PAGE: 'subreddit_page',
  MAIN_PAGE: 'main_page',
  OTHER_PAGE: 'other_page',
});

const Selectors = Object.freeze({
  commentWrapper: 'div[style*="--commentswrapper-gradient-color"] > div, div[style*="max-height: unset"] > div',
  torComment: 'div[data-tor-comment]',
  postContent: 'div[data-test-id="post-content"]'
});

const UrlRegex = Object.freeze({
  commentPage: /\/r\/.*\/comments\/.*/,
  subredditPage: /\/r\/.*\//
});

const CommentUtils = Object.freeze({
  isTorComment: (comment) => comment.querySelector('[data-test-id="comment"]')
    ? comment.querySelector('[data-test-id="comment"]').textContent.includes('m a human volunteer content transcriber for Reddit')
    : false,
  torCommentsExist: () => !!document.querySelector(Selectors.torComment),
  commentWrapperExists: () => !!document.querySelector('[data-reddit-comment-wrapper="true"]')
});

Next, we check whether or not a user directly opens a comment page (“post”), then perform a RegEx check and update the directPage variable. This case occurs when a user directly opens the URL (e.g. by typing it into the address bar or clicking an <a> element on another page, like Twitter).

let directPage = false;

if (UrlRegex.commentPage.test(window.location.href)) {
  directPage = true;
  moveComments();
}

Besides opening a page directly, a user normally interacts with the SPA. To catch this case, we can add a message listener to our comment.js file by using the runtime API.

chrome.runtime.onMessage.addListener(msg => {
  if (msg.type === messageTypes.COMMENT_PAGE) {
    waitForComment(moveComments);
  }
});

All we need now are the functions. Let’s create a moveComments() function. It moves the special “tor comment” to the start of the comment section. It also conditionally applies a background color and border (if borders are enabled in the settings) to the comment. For this, we call the storage API and load the settings entry:

function moveComments() {
  if (CommentUtils.commentWrapperExists()) {
    return;
  }

  const wrapper = document.querySelector(Selectors.commentWrapper);
  let comments = wrapper.querySelectorAll(`${Selectors.commentWrapper} > div`);
  const postContent = document.querySelector(Selectors.postContent);

  wrapper.dataset.redditCommentWrapper = 'true';
  wrapper.style.flexDirection = 'column';
  wrapper.style.display = 'flex';

  if (directPage) {
    comments = document.querySelectorAll("[data-reddit-comment-wrapper='true'] > div");
  }

  chrome.storage.local.get('settings', ({ settings }) => {
    comments.forEach(comment => {
      if (CommentUtils.isTorComment(comment)) {
        comment.dataset.torComment = 'true';

        if (settings.background) {
          comment.style.backgroundColor = 'var(--newCommunityTheme-buttonAlpha05)';
        }

        if (settings.border) {
          comment.style.outline = '2px solid red';
        }

        comment.style.order = "-1";
        applyWaiAria(postContent, comment);
      }
    });
  });
}

The applyWaiAria() function is called inside the moveComments() function—it adds aria- attributes. The other function creates a unique identifier for use with the aria- attributes.

function applyWaiAria(postContent, comment) {
  const postMedia = postContent.querySelector('img[class*="ImageBox-image"], video');
  const commentId = uuidv4();

  if (!postMedia) {
    return;
  }

  comment.setAttribute('id', commentId);
  postMedia.setAttribute('aria-describedby', commentId);
}

function uuidv4() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
    var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
    return v.toString(16);
  });
}

The following function waits for the comments to load and calls the callback parameter if it finds the comment wrapper.

function waitForComment(callback) {
  const config = { childList: true, subtree: true };
  const observer = new MutationObserver(mutations => {
    for (const mutation of mutations) {
      if (document.querySelector(Selectors.commentWrapper)) {
        callback();
        observer.disconnect();
        clearTimeout(timeout);
        break;
      }
    }
  });

  observer.observe(document.documentElement, config);
  const timeout = startObservingTimeout(observer, 10);
}

function startObservingTimeout(observer, seconds) {
  return setTimeout(() => {
    observer.disconnect();
  }, 1000 * seconds);
}

Adding a service worker

Remember when we added a listener for messages inside the content script? That listener isn’t currently receiving anything because nothing is sending messages yet. We need to send messages to the content script ourselves. For this purpose, we need to register a service worker.

We have to register our service worker inside the manifest.json by appending the following code to it:

"background": {
  "service_worker": "sw.js"
}

Don’t forget to create the sw.js file inside the src directory (service workers always need to be created in the extension’s root directory, src).

Now, let’s create some constants for the message and page types:

const messageTypes = Object.freeze({
  COMMENT_PAGE: 'comment_page',
  SUBREDDIT_PAGE: 'subreddit_page',
  MAIN_PAGE: 'main_page',
  OTHER_PAGE: 'other_page',
});

const UrlRegex = Object.freeze({
  commentPage: /\/r\/.*\/comments\/.*/,
  subredditPage: /\/r\/.*\//
});

const Utils = Object.freeze({
  getPageType: (url) => {
    if (new URL(url).pathname === '/') {
      return messageTypes.MAIN_PAGE;
    } else if (UrlRegex.commentPage.test(url)) {
      return messageTypes.COMMENT_PAGE;
    } else if (UrlRegex.subredditPage.test(url)) {
      return messageTypes.SUBREDDIT_PAGE;
    }

    return messageTypes.OTHER_PAGE;
  }
});

We can add the service worker’s actual content. We do this with an event listener on the history state (onHistoryStateUpdated) that detects when a page has been updated with the History API (which is commonly used in SPAs to navigate without a page refresh). Inside this listener, we query the active tab and extract its tabId. Then we send a message to our content script containing the page type and URL.

chrome.webNavigation.onHistoryStateUpdated.addListener(async ({ url }) => {
  const [{ id: tabId }] = await chrome.tabs.query({ active: true, currentWindow: true });

  chrome.tabs.sendMessage(tabId, {
    type: Utils.getPageType(url),
    url
  });
});

All done!

We’re finished! Navigate to Chrome’s extension management page (chrome://extensions) and hit the reload icon on the unpacked extension. If you open a Reddit post that contains a “Transcribers of Reddit” comment with an image transcription (like this one), it will be moved to the start of the comment section and be highlighted as long as we’ve enabled it in the extension settings.

The “Transcribers of Reddit” extension highlights a particular comment by moving it to the top of the Reddit thread’s comment list and giving it a bright red border

Conclusion

Was that as hard as you thought it would be? It’s definitely a lot more straightforward than I thought before digging in. After setting up the manifest.json and creating any page files and assets we need, all we’re really doing is writing HTML, CSS, and JavaScript like normal.

If you ever find yourself stuck along the way, the Chrome API documentation is a great resource to get back on track.

Once again, here’s the GitHub repo with all of the code we walked through in this article. Read it, use it, and let me know what you think of it!


How to Create a Browser Extension originally published on CSS-Tricks


The Many Faces of VS Code in the Browser

VS Code is built from web technologies (HTML, CSS, and JavaScript), but dare I say today it’s mostly used as a local app that’s installed on your machine. That’s starting to shift, though, as there has been an absolute explosion of places VS Code is becoming available to use on the web. I’d say it’s kind of a big deal, as VS Code isn’t just some editor; it’s the predominant editor used by web developers. Availability on the web means being able to use it without installing software, which is significant for places, like schools, where managing all that is a pain, and computers, like Chromebooks, where you don’t really install local software at all.

It’s actually kind of confusing all the different places this shows up, so let’s look at the landscape as I see it today.

vscode.dev

It was just a few weeks ago as I write that Microsoft dropped vscode.dev. Chris Dias:

Modern browsers that support the File System Access API (Edge and Chrome today) allow web pages to access the local file system (with your permission). This simple gateway to the local machine quickly opens some interesting scenarios for using VS Code for the Web as a zero-installation local development tool

It’s just Edge and Chrome that have this API right now, but even if you can’t get it, you can still upload files, or perhaps more usefully, open a repo. If it does work, it’s basically… VS Code in the browser. It can open your local folders and it behaves largely just like your local VS Code app does.

I haven’t worked a full day in it or anything, but basic usage seems about the same. There is some very explicit permission-granting you have to do, and keyboard commands are a bit weird as you’re having to fight the browser’s keyboard commands. Plus, there is no working terminal.

Other than that it feels about the same. Even stuff like “Find in Project” seems just as fast as local, even on large sites.

GitHub.dev: The whole “Press Period (.) on any GitHub Repo” Thing

You also get VS Code in the browser if you go to github.dev, but it’s not quite wired up the same.

You don’t have the opportunity here to open a local folder. Instead, you can quickly look at a GitHub repo.

But perhaps even more notably, you can make changes, save the files, then use the Source Control panel right there to commit the code or make a pull request.

You’d think vscode.dev and github.dev would merge into one at some point, but who knows.

Oh and hey, thinking of this in reverse, you can open GitHub repos on your locally installed VS Code as well directly (even without cloning it).

There is no terminal or preview in those first two, but there is with GitHub Codespaces.

GitHub Codespaces is also VS Code in the browser, but fancier. For one thing, you’re auth’d into Microsoft-land while using it, meaning it’s running all your VS Code extensions that you run locally. But perhaps a bigger deal is that you get a working terminal. When you spin it up, you see:

Welcome to Codespaces! You are on our default image.

• It includes runtimes and tools for Python, Node.js, Docker, and more. See the full list here: https://aka.ms/ghcs-default-image
• Want to use a custom image instead? Learn more here: https://aka.ms/configure-codespace

🔍 To explore VS Code to its fullest, search using the Command Palette (Cmd/Ctrl + Shift + P or F1).

📝 Edit away, run your app as usual, and we’ll automatically make it available for you to access.

On a typical npm-powered project, you can npm run your scripts and you’ll get a URL running the project as a preview.

This is in the same territory as Gitpod.

Gitpod is a lot like GitHub CodeSpaces in that it’s VS Code in the browser but with a working terminal. That terminal is like a full-on Docker/Linux environment, so you’ve got a lot of power there. It might even be able to mimic your production environment, assuming you’re using all things that Gitpod supports.

It’s also worth noting that Gitpod jacks in “workspaces” that run services. On that demo project above, a MongoDB instance is running on one port, and a preview server is open on another port, which you can see in a mock browser interface there. Being able to preview the project you’re working on is an absolute must and they are handling that elegantly.

Perhaps you’ll remember we used Gitpod in a video we did about DataStax Astra (jumplink) which worked out very nicely.

My (absolute) guess is that Gitpod could be acquired by Microsoft. It seems like Microsoft is headed in this exact direction and getting bought is certainly better than getting steamrolled by the company that makes the core tech that you’re using. You gotta play the “no—this is good! it validates the market! we baked you an awkward cake!” for a while but I can’t imagine it ends well.

This is also a lot like CodeSandbox or Stackblitz.

Straight up, CodeSandbox and Stackblitz also run VS Code in the browser. Or… something that leverages bits and bobs of VS Code at least (a recent episode of Syntax gets into the StackBlitz approach a bit).

You can also install VS Code on your own server.

That’s what Coder’s code-server is. So, rather than use someone else’s web version of VS Code, you use your own.

You could run VS Code on a local server, but I imagine the big play here is that you run it on live cloud web servers you control. Maybe servers, you know, with your code running on them, so you can use this to literally edit the files on the server. Who needs VIM when you have full-blown VS Code (lolz).

We talked about the school use case, and I imagine this is compelling for that as well, since the school might not even rely on a third-party service, but host it themselves. The iPad/Chromebook use cases are relevant here, too, and perhaps even more so. The docs say “Preserve battery life when you’re on the go; all intensive tasks run on your server,” which I guess means that unlike vscode.dev where tasks like “Find in Project” are (presumably) done on your local machine, they are done by the server (maybe slower, but not slower than a $ 200 laptop?).


There is clearly something in the water with all this. I’m bullish on web-based IDEs. Just look at what’s happening with Figma (kicking ass), which I would argue is one-third due to the product meetings screen designers need with little bloat, one-third due to the simple team and permissions model, and one-third due to the fact that it’s built web-first.


The post The Many Faces of VS Code in the Browser appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.


Chapter 10: Browser Wars

In June of 1995, representatives from Microsoft arrived at the Netscape offices. The stated goal was to find ways to work together—Netscape as the single dominant force in the browser market and Microsoft as a tech giant just beginning to consider the implications of the Internet. Both groups, however, were suspicious of ulterior motives.

Marc Andreessen was there. He was already something of a web celebrity. Newly appointed Netscape CEO James Barksdale also came. On the Microsoft side was a contingent of product managers and engineers hoping to push Microsoft into the Internet market.

The meeting began friendly enough, as the delegation from Microsoft shared what they were working on in the latest version of their operating system, Windows 95. Then, things began to sour.

According to accounts from Netscape, “Microsoft offered to make an investment in Netscape and give Netscape’s software developers crucial technical information about the Windows operating system if Netscape would agree not to make a browser for [the] Windows 95 operating system.” If that was to be believed, Microsoft would have tiptoed over the line of what is legal. The company would be threatening to use its monopoly to squash competition.

Andreessen, no stranger to dramatic flair, would later dress the meeting up with a nod to The Godfather in his deposition to the Department of Justice: “I expected to find a bloody computer monitor in my bed the next day.”

Microsoft claimed the meeting was a “setup,” initiated by Netscape to bait them into a compromising situation they could turn to their advantage later.

There are a few different places to mark the beginning of the browser wars. The release of Internet Explorer 1, for instance (late summer, 1995). Or the day Andreessen called out Microsoft as nothing but a “poorly debugged set of device drivers” (early 1995). But June 21, 1995—when Microsoft and Netscape came to a meeting as prospective partners and left as bitter foes—may be the most definitive.


Andreessen called it “free, but not free.”

Here’s how it worked. When the Netscape browser was released, it came with a fee of $39 per copy. Officially speaking, that is. But fully functional Netscape beta versions were free to download from the company’s website. And universities and non-profits could easily get zero-cost licenses.

For the upstarts of the web revolution and the open source tradition, Netscape was free enough. Buttoned-up corporations buying in bulk with specific contractual needs could license the software for a reasonable fee. Free, but not free. “It looks free optically, but it is not,” a Netscape employee would later describe it. “Corporations have to pay for it. Maintenance has to be paid.”

“It’s basically a Microsoft lesson, right?” was how Andreessen framed it. “If you get ubiquity, you have a lot of options, a lot of ways to benefit from that.” If people didn’t have a way to get quick and easy access to Netscape, it would never spread. It was a lesson Andreessen had learned behind his computer terminal at the NCSA research lab at the University of Illinois. Just a year prior, he and his friends built the wildly successful, cross-platform Mosaic browser.

Andreessen worked on Mosaic for several years in the early ’90s. But he began to feel cramped by increasing demands from higher-ups at NCSA hoping to capitalize on the browser’s success. At the end of 1993, Andreessen headed west to stake his claim in Silicon Valley. That’s where he met James Clark.

Netscape Communications Corporation co-founders Jim Clark, left, and Marc Andreessen (AP Photo/HO)

Clark had just cut ties with Silicon Graphics, the company he created, and was a legend in the valley. When he saw the web for the first time, someone suggested he meet with Andreessen. So he did. The two hit it off immediately.

Clark—with his newly retired time and fortune—brought an inner circle of tech visionaries together for regular meetings. “For the invitees, it seemed like a wonderful opportunity to talk about ideas, technologies, strategies,” one account would later put it. “For Clark, it was the first step toward building a team of talented like-minded people who would populate his new company.” Andreessen, still very much the emphatic and relentless advocate of the web, increasingly moved to the center of this circle.

The duo considered several ideas. None stuck. But they kept coming back to one. Building the world’s first commercial browser.

And so, on a snowy day in mid-April 1994, Andreessen and Clark took a flight out to Illinois. They were there with a single goal: hire the members of the original Mosaic team still working at the NCSA lab for their new company. They went straight to the lobby of a hotel just outside the university. One by one, Clark met with five of the people who had helped create Mosaic (plus Lou Montulli, creator of Lynx and a student at the University of Kansas) and offered them a job.

Right in a hotel room, Clark printed out contracts with lucrative salaries and stock options. Then he told them the mission of his new company. “Its mandate—Beat Mosaic!—was clear,” one employee recalled. By the time Andreessen and Clark flew back to California the next day, they had the six new employees of the soon-to-be-named Netscape.

Within six months, they would release their first browser—Netscape Navigator. Six months after that, the easy-to-use, easy-to-install browser would overrun the market and bring millions of users online for the first time.

Clark, speaking to the chaotic energy of the browser team and the speed at which they built software that changed the world, would later say Netscape gave “anarchy credibility.” Writer John Cassidy puts that into context. “Anarchy in the post-Netscape sense meant that a group of college kids could meet up with a rich eccentric, raise some money from a venture capitalist, and build a billion-dollar company in eighteen months,” adding, “Anarchy was capitalism as personal liberation.”


Inside of Microsoft were a few restless souls.

The Internet, and the web, was passing the tech giant by. Windows was the most popular operating system in the world—a virtual monopoly. But that didn’t mean they weren’t vulnerable.

As early as 1993, three employees at Microsoft—Steven Sinofsky, J. Allard, and Benjamin Slivka—began to sound the alarms. Their uphill battle to make Microsoft realize the promise of the Internet is documented in the “Inside Microsoft” profile penned by Kathy Rebello and published in Bloomberg in 1996. “I dragged people into my office kicking and screaming,” Sinofsky told Rebello. “I got people excited about this stuff.”

Some employees believed Microsoft was distracted by a need to control the network. Investment poured into a proprietary network in the mold of CompuServe or Prodigy, called the Microsoft Network (or MSN). Microsoft wanted to control the entire networked experience. But MSN would ultimately be a huge failure.

Slivka and Allard believed Microsoft was better positioned to build with the Internet rather than compete against it. “Microsoft needs to ensure that we ride the success of the Web, instead of getting drowned by it,” wrote Slivka in some of his internal communication.

Allard went a step further, drafting an internal memo named “Windows: The Next Killer Application for the Internet.” Allard’s approach, laid out in the document, would soon be the cornerstone of Microsoft’s Internet strategy. It consisted of three parts. First, embrace the open standards of the web. Second, extend its technology to the Microsoft ecosystem. Finally (and often forgotten), innovate and improve web technologies.

After a failed bid to acquire BookLink’s InternetWorks browser in 1994—AOL swooped in and outbid them—Microsoft finally got serious about the web. And their meeting with Netscape didn’t yield any results. Instead, they negotiated a deal with NCSA’s commercial partner Spyglass to license Mosaic for the first Microsoft browser.

In August of 1995, Microsoft released Internet Explorer version 1.0. It wasn’t very original, based on code that Spyglass had licensed to dozens of other partners. Shipped as part of an Internet Jumpstart add-on, the browser was bare-bones, clunkier and harder to use than what Netscape offered.

Source: Web Design Museum

On December 7th, Bill Gates hosted a large press conference on the anniversary of Pearl Harbor. He opened with news about the Microsoft Network, the star of the show. But he also demoed Internet Explorer, borrowing language directly from Allard’s proposal. “So the Internet, the competition will be kind of, once again, embrace and extend,” Gates announced, “And we will embrace all the popular Internet protocols… We will do some extensions to those things.”

Microsoft had entered the market.


Like many of her peers, Rosanne Siino taught herself the world of personal computing. After studying English in college—with an eye towards journalism—Siino found herself at a PR firm with clients like Dell and Seagate. Naturally curious and resourceful, she read trade magazines and talked to engineers to learn what she could about computing in the information age.

She developed a special talent for taking the language and stories of engineers and translating them into bold visions of the future. Friendly, and always engaging, Siino built up a Rolodex of trade publication and general media contacts along the way.

After landing a job at Silicon Graphics, Siino worked closely with James Clark (he would later remark she was “one of the best PR managers at SGI”). She identified with Clark’s restlessness when he made plans to leave the company—an exit she helped coordinate—and decided if the opportunity came to join his new venture, she’d jump ship.

A few months later, she did. Siino was employee number 19 at Netscape; its first public relations hire.

When Siino arrived at the brand new Netscape offices in Mountain View, the first thing she did was sit down and talk to each one of the engineers. She wanted to hear—straight from the source—what the vision of Netscape was. She heard a few things. Netscape was building a “killer application,” one that would make other browsers irrelevant. They had code that was better, faster, and easier to use than anything out there.

Siino knew she couldn’t sell good code. But a young and hardworking group of fresh-out-of-college transplants from rural America making a run at entrenched Silicon Valley; that was something she could sell. “We had this twenty-two-year-old kid who was pretty damn interesting and I thought, ‘There’s a story right there,’” she later said in an interview for the book Architects of the Web. “And we had this crew of kids who had come out from Illinois and I thought, ‘There’s a story there too.’”

Inside of Netscape, some executives and members of the board had been talking about an IPO. With Microsoft hot on their heels, and competitor Spyglass launching a successful IPO of their own, timing was critical. “Before very long, Microsoft was sure to attack the Web browser market in a more serious manner,” writer John Cassidy explains, “If Netscape was going to issue stock, it made sense to do so while the competition was sparse.” Not to mention, a big, flashy IPO was just what the company needed to make headlines all around the country.

In the months leading up to the IPO, Siino crafted a calculated image of Andreessen for the press. She positioned him as a leader of the software generation, an answer to the now-stodgy, silicon-driven hardware generation of the ’60s and ’70s. In interviews and profiles, Siino made sure Andreessen came off as a whip-smart visionary ready to tear down the old ways of doing things; the “new Bill Gates.”

That required a fair bit of cooperation from Andreessen. “My other real challenge was to build up Marc as a persona,” she would later say. Sometimes, Andreessen would complain about the interviews, “but I’d be like, ‘Look, we really need to do this.’ And he’s savvy in that way. He caught on.” Soon, it was almost natural, and as Andreessen traveled around with CEO James Barksdale to talk to potential investors ahead of their IPO, Netscape hype continued to inflate.

August 9, 1995, was the day of the Netscape IPO. Employees buzzed around the Mountain View offices, too nervous to watch the financial news beaming from their screens or the TV. “It was like saying don’t notice the pink elephant dancing in your living room,” Siino said later. They shouldn’t have worried. In its first day of trading, Netscape’s stock price rose 108%. It was one of the best opening days a stock had ever had on Wall Street. Some of the founding employees went to bed that night millionaires.

Not long after, Netscape released version 2 of their browser. It was their most ambitious release to date. Bundled in the software were tools for checking email, talking with friends, and writing documents. It was sleek and fast. The Netscape homepage that loaded each time the software started sported all sorts of nifty and well-known web destinations.

Not to mention Java. Netscape 2 was the first version to ship with support for Java applets, small applications run directly in the browser. With Java, Netscape aimed to compete directly with Microsoft and their operating system.

To accompany the release, Netscape recruited young programmer Brendan Eich to work on a scripting language that riffed on Java. The result was JavaScript. Eich created the first version in 10 days as a way for developers to make pages more interactive and dynamic. It was primitive, but easy to grasp, and powerful. Since then, it has become one of the most popular programming languages in the world.
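To give a sense of what that early interactivity looked like: one of the canonical first uses of JavaScript was validating a form before it ever hit the server. Here’s a minimal, hypothetical sketch of that kind of logic in modern syntax (the function and field names are my own invention, not from any Netscape source); in 1995 something like it would have been wired to a form’s onsubmit handler.

```javascript
// Illustrative only: form validation in the spirit of early
// Netscape-era JavaScript. No DOM is used, so the logic itself
// can run anywhere; in a page it would gate form submission.
function validateOrderForm(fields) {
  const errors = [];
  // Require a non-empty name.
  if (!fields.name || fields.name.trim() === "") {
    errors.push("Name is required.");
  }
  // A deliberately naive email check, typical of the era.
  if (!fields.email || fields.email.indexOf("@") === -1) {
    errors.push("Email address looks invalid.");
  }
  return errors; // Empty array means the form may be submitted.
}
```

The point of running this in the browser was latency: catching a bad field instantly, instead of waiting for a round trip to a mid-’90s web server over a dial-up connection.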

Microsoft wasn’t far behind. But Netscape felt confident. They had pulled off the most ambitious product the web had ever seen. “In a fight between a bear and an alligator, what determines the victor is the terrain,” Andreessen said in an interview from the early days of Netscape. “What Microsoft just did was move into our terrain.”


There’s an old adage at Microsoft that it never gets something right until version 3.0. It was true even of their flagship product, Windows, and has notoriously been true of its most famous applications.

The first version of Internet Explorer was a rushed port of the Mosaic code that acted as little more than a public statement that Microsoft was going into the browser business. The second version, released just after Netscape’s IPO in late 1995, saw rapid iteration, but lagged far behind. With Internet Explorer 3, Microsoft began to get the browser right.

Microsoft’s big, showy press conference hyped Internet Explorer as a true market challenger. Behind the scenes, it operated more like a skunkworks experiment. Six people were on the original product team. In a company of tens of thousands. “A bit like the original Mac team, the IE team felt like the vanguard of Microsoft,” one-time Internet Explorer lead Brad Silverberg would later say, “the vanguard of the industry, fighting for its life.”

That changed quickly. Once Microsoft recognized the potential of the web, they shifted their weight to it. In Speeding the Net, a comprehensive account of the rise of Netscape and its fall at the hands of Microsoft, authors Josh Quittner and Michelle Slatalla describe the Microsoft strategy. “In a way, the quality of it didn’t really matter. If the first generation flopped, Gates could assign a team of his best and brightest programmers to write an improved model. If that one failed too, he could hire even better programmers and try again. And again. And again. He had nearly unlimited resources.”

By version 3, the Internet Explorer team had a hundred people on it (including Chris Wilson of the original NCSA Mosaic team). That number would reach the thousands in a few short years. The software rapidly closed the gap. Internet Explorer introduced features that had given Netscape an edge—and even introduced their own HTML extensions, dynamic animation tools for developers, and rudimentary support of CSS.

In the summer of 1996, Walt Mossberg talked up Microsoft’s browser. Only months prior, he had labeled Netscape Navigator the “clear victor.” But he was beginning to change his mind. “I give the edge, however, to Internet Explorer 3.0,” he wrote upon the release of Microsoft’s version 3. “It’s a better browser than Navigator 3.0 because it is easier to use and has a cleaner, more flexible user interface.”

Microsoft Internet Explorer 3.0.01152

Netscape Navigator 3.04

Still, most Microsoft executives knew that competing on features would never be enough. In December of 1996, senior VP James Allchin emailed his boss, Paul Maritz. He laid out the current strategy, an endless chase after Netscape’s feature set. “I don’t understand how IE is going to win,” Allchin conceded, “My conclusion is that we must leverage Windows more.” In the same email, he added, “We should think first about an integrated solution — that is our strength.” Microsoft was not about to simply lie down and allow themselves to be beaten. They focused on two things: integration with Windows and wider distribution.

When it was released, Internet Explorer 4 was more tightly integrated with the operating system than any previous version; an almost inseparable part of the Windows package. It could be used to browse files and folders. Its “push” technology let you stream the web, even when you weren’t actively using the software. It used internal APIs that were unavailable to outside developers to make the browser faster, smoother, and readily available.

And then there was distribution. Days after Netscape and AOL shook on a deal to include their browser on the AOL platform, AOL abruptly changed their mind and went with Internet Explorer instead. It would later be revealed that Microsoft had made them, as one writer put it (extending The Godfather metaphor once more), an “offer they couldn’t refuse.” Microsoft had dropped their prices down to the floor and—more importantly—promised AOL precious real estate pre-loaded on the desktop of every copy of the next Windows release.

Microsoft fired their second salvo with Compaq. Up to that point, all Compaq computers had shipped with Netscape pre-installed on Windows. When Microsoft threatened to suspend Compaq’s license to use Windows at all (as was revealed later in court documents), that changed to Internet Explorer too.

By the time Windows 98 was released, Internet Explorer 4 came pre-installed, free for every user, and impossible to remove.


“Mozilla!” interjected Jamie Zawinski. He was in a meeting at the time, which now rang in deafening silence for just a moment. Heads turned. Then, they kept going.

This was early days at Netscape. A few employees from engineering and marketing huddled together to try to come up with a name for the thing. One employee suggested they were going to crush Mosaic, like a bug. Zawinski—with the dry, biting humor he was well known for—offered up Mozilla, “as in Mosaic meets Godzilla.”

Eventually, marketer Greg Sands settled on Netscape. But around the office, the browser was, from then on, nicknamed Mozilla. Early marketing materials on the web even featured a Mozilla-inspired mascot, a green lizard with a know-it-all smirk, before they shelved it for something more professional.

Credit: Dave Titus


It would be years before the name would come back in any public way; and Zawinski would have a hand in that too.

Zawinski had been with Netscape almost from the beginning. He was employee number 20, brought in right after Rosanne Siino, to pick up the work Andreessen had done at NCSA by building the flagship version of Netscape for X Windows. By the time he joined, he already had something of a reputation for solving complex technical challenges.

Jamie Zawinski

Zawinski’s earliest memories of programming date back to eighth grade. In high school, he was a terrible student. But he still managed to get a job after school as a programmer, working on the one thing that managed to keep him interested: code. After that, he started work for the startup Lucid, Inc., which boasted a strong pedigree of programming legends at its helm. Zawinski worked on the Common Lisp programming language and the popular IDE Emacs; technologies revered in the still small programming community. By virtue of his work on the projects, Zawinski had instant credibility among the tech elite.

At Netscape, the engineering team was central to the way things worked. It was why Siino had chosen to meet with members of that team as soon as she began, and why she crafted the story of Netscape around the way they operated. The result was a high-pressure, high-intensity atmosphere so central to the company that it would become part of its mythology. They moved so quickly that many began to call such a rapid pace of development “Netscape Time.”

“It was really a great environment. I really enjoyed it,” Zawinski would later recall. “Because everyone was so sure they were right, we fought constantly but it allowed us to communicate fast.” But tempers did flare (one article details a time when he threw a chair against the wall and left abruptly for two weeks after his computer crashed), and many engineers would later reflect on the toxic workplace. Zawinski once put it simply: “It wasn’t healthy.”

Still, engineers had a lot of sway at the organization. Many of them, Zawinski included, were advocates of free software. “I guess you can say I’ve been doing free software since I’ve been doing software,” he would later say in an interview. For Zawinski, software was meant to be free. From his earliest days on the Netscape project, he advocated for a more free version of the browser. He and others on the engineering team were at least partly responsible for the creative licensing that went into the company’s “free, but not free” business model.

In 1997, technical manager Frank Hecker breathed new life into the free software paradigm. He wrote a 30-page whitepaper proposing what several engineers had wanted for years—to release the entire source of the browser for free. “The key point I tried to make in the document,” Hecker asserted, “was that in order to compete effectively Netscape needed more people and companies working with Netscape and invested in Netscape’s success.”

With the help of CTO Eric Hahn, Hecker and Zawinski made their case all the way to the top. By the time they got in the room with James Barksdale, most of the company had already come around to the idea. Much to everyone’s surprise, Barksdale agreed.

On January 23, 1998, Netscape made two announcements. The first everyone expected. Netscape had been struggling to compete with Microsoft for nearly a year. The most recent release of Internet Explorer version 4, bundled directly into the Windows operating system for free, was capturing ever larger portions of their market share. So Netscape announced it would be giving its browser away for free too.

The next announcement came as a shock. Netscape was going open source. The browser’s entire source code—millions of lines of code—would be released to the public and open to contributions from anybody in the world. Led by Netscape veterans like Michael Toy, Tara Hernandez, Scott Collins, and Jamie Zawinski, the team would have three months to scrub the code base and get it ready for public distribution. The effort had a name too: Mozilla.

Firefox 1.0 (Credit: Web Design Museum)

On the surface, Netscape looked calm and poised to take on Microsoft with the force of the open source community at their wings. Inside the company, things looked much different. The three months that followed were filled with frenetic energy, close calls, and unparalleled pace. Recapturing the spirit of the earliest days of innovation at Netscape, engineers worked frantically to patch bugs and get the code ready to be released to the world. In the end, they did it, but only by the skin of their teeth.

In the process, the project spun out into an independent organization under the domain Mozilla.org. It was staffed entirely by Netscape engineers, but Mozilla was not technically a part of Netscape. When Mozilla held a launch party in April of 1998, just months after their public announcement, it didn’t just have Netscape members in attendance.

Zawinski had organized the party, and he insisted that a now growing community of people outside the company who had contributed to the project be a part of it. “We’re giving away the code. We’re sharing responsibility for development of our flagship product with the whole net, so we should invite them to the party as well,” he said, adding, “It’s a new world.”


On the day of his testimony in November of 1998, Steve McGeady sat, as one writer described, “motionless in the witness box.” He had been waiting for this moment for a long time; the moment when he could finally reveal, in his view, the nefarious and monopolist strain that coursed through Microsoft.

The Department of Justice had several key witnesses in their antitrust case against Microsoft, but McGeady was a linchpin. As Vice President at Intel, McGeady had regular dealings with Microsoft, and his company stood outside of the Netscape and Microsoft conflict. There was an extra layer of tension to his particular testimony, though. “The drama was heightened immeasurably by one stark reality,” noted one journalist’s account of the trial, “nobody—literally, nobody—knew what McGeady was going to say.”

When he got his chance to speak, McGeady testified that high-ranking Microsoft executives had told him that their goal was to “cut off Netscape’s air supply.” Using their monopoly position in the operating system market, Microsoft threatened computer manufacturers—many of whom Intel had regular dealings with—to ship their computers with Internet Explorer or face having their Windows licenses revoked entirely.

Drawing on the language Bill Gates used in his announcement of Internet Explorer, McGeady claimed that one executive had laid out their strategy: “embrace, extend and extinguish.” According to his allegations, Microsoft never intended to enter into a competition with Netscape. They were ready to use every aggressive tactic and walk the line of legality to crush them. It was a major turning point for the case and a massive win for the DOJ.

The case against Microsoft, however, had begun years earlier, when Netscape retained a team from the antitrust law firm Sonsini Goodrich & Rosati in the summer of 1995. The legal team included outspoken anti-Microsoft crusader Gary Reback, as well as Susan Creighton. Reback would be the most public member of the firm in the coming half-decade, but it would be Creighton’s contributions that ultimately turned the attention of the DOJ. Creighton began her career as a clerk for Supreme Court Justice Sandra Day O’Connor. She quickly developed a reputation for precision and thoroughness. Her deliberate and methodical approach made her a perfect fit for a full and complete breakdown of Microsoft’s anti-competitive strategy.

Susan Creighton (Credit: Wilson Sonsini Goodrich & Rosati)

Creighton’s work with Netscape led her to write a 222-page document detailing the anti-competitive practices of Microsoft. She laid out her case plainly and simply. “It is about a monopolist (Microsoft) that has maintained its monopoly (desktop operating systems) for more than ten years. That monopoly is threatened by the introduction of a new technology (Web software)…”

The document was originally planned as a book, but Netscape feared that if the public knew just how much danger they were in from Microsoft, their stock price would plummet. Instead, Creighton and Netscape handed it off to the Department of Justice.

Inside the DOJ, it would trigger a renewed interest in ongoing antitrust investigations of Microsoft. Years of subpoenaing, information gathering, and lengthy depositions would follow. After almost three years, in May of 1998, the Department of Justice and 20 state attorneys general filed an antitrust suit against Microsoft, a company which had only just then crossed a 50% share of the browser market.

“No firm should be permitted to use its monopoly power to develop a chokehold on the browser software needed to access the Internet,” announced Janet Reno—the attorney general under President Clinton—when charges were brought against Microsoft.

At the center of the trial was not necessarily the stranglehold Microsoft had on the software of personal computers—not technically an illegal practice. It was the way they used their monopoly to directly counter competition in other markets. For instance, the practice of threatening to revoke licenses from manufacturers that packaged computers with Netscape. Netscape’s account of the June 1995 meeting factored in as well (when Andreessen was asked why he had taken such detailed notes on the meeting, he replied, “I thought that it might be a topic of discussion at some point with the US government on antitrust issues.”)

Throughout the trial, both publicly and privately, Microsoft reacted to scrutiny poorly. They insisted that they were right; that they were doing what was best for their customers. In interviews and depositions, Bill Gates would often come off as curt and dismissive, unable or unwilling to cede any power. The company insisted that the browser and operating system were co-existent, that one could not live without the other—a fact handily refuted by the judge when he noted that he had managed to uninstall Internet Explorer from Windows in “less than 90 seconds.” The trial became a national sensation as tech enthusiasts and news junkies waited with bated breath for each new revelation.

Microsoft President Bill Gates, left, testifies on Capitol Hill on Tuesday, March 3, 1998. (Credit: Ken Cedeno/AP file photo)

In November of 1999, the presiding judge issued his ruling. Microsoft had, in fact, used its monopoly power and violated antitrust laws. That was followed in the summer of 2000 by a proposed remedy: Microsoft was to be broken up into two separate companies, one to handle its operating software, and the other its applications. “When Microsoft has to compete by innovating rather than reaching for its crutch of the monopoly, it will innovate more; it will have to innovate more. And the others will be free to innovate,” Iowa State Attorney General Tom Miller said after the judge’s ruling was announced.

That never happened. An appeal in 2002 resulted in a reversal of the ruling and the Department of Justice agreed to a lighter consent decree. By then, Internet Explorer’s market share stood at around 90%. The browser wars were, effectively, over.


“Are you looking for an alternative to Netscape and Microsoft Explorer? Do you like the idea of having an MDI user interface and being able to browse in multiple windows?… Is your browser slow? Try Opera.”

That short message announced Opera to the world for the first time in April of 1995, posted by the browser’s creators to a Usenet forum about Windows. The tone of the message—technically meticulous, a little pointed, yet genuinely idealistic—reflected the philosophy of Opera’s creators, Jon Stephenson von Tetzchner and Geir Ivarsøy. Opera, they claimed, was well-aligned with the ideology of the web.

Opera began as a project run out of the Norwegian telecommunications firm Telenor. Once it became stable, von Tetzchner and Ivarsøy rented space at Telenor to spin it out into an independent company. Not long after, they posted that announcement and released the first version of the Opera web browser.

The team at Opera was small, but focused and effective, loyal to the open web. “Browsers are in our blood,” von Tetzchner would later say. Time and time again, the Opera team would prove that. They were staffed by the web’s true believers, and have often prided themselves on leading the development of web standards and an accessible web.

In the mid-to-late ’90s, Geir Ivarsøy was the first person to implement the CSS standard in any browser, in Opera 3.5. That would prove more than enough to convince the creator of CSS, Håkon Wium Lie, to join the company as CTO. Ian Hickson worked at Opera during the time he developed the CSS Acid Test at the W3C.

The original CSS Acid Test (Credit: Eric Meyer)

The company began developing a version of their browser for low-powered mobile devices in developing nations as early as 1998. They have often tried to push the entire web community towards web standards, leading when possible by example.

Years after the antitrust lawsuit of Microsoft, and resulting reversal in the appeal, Opera would find themselves embroiled in a conflict on a different front of the browser wars.

In 2007, Opera filed a complaint with the European Commission. Much like the case made by Creighton and Netscape, Opera alleged that Microsoft was abusing its monopoly position by bundling new versions of Internet Explorer with Windows 7. The EU had begun to look into allegations against Microsoft almost as soon as the Department of Justice had, but the Opera complaint added a substantial and recent area of inquiry. Opera claimed that Microsoft was limiting user choice by obscuring additional browser options. “You could add more browsers, to give consumers a real choice between browsers, you put them in front of their eyeballs,” Lie said at the time of the complaint.

In Opera’s summary of their complaint, they painted themselves as defenders of a free and open web. Opera, they argued, were advocates of the web as it was intended—accessible, universal, and egalitarian. Once again citing the language of “embrace, extend, and extinguish,” the company also called out Microsoft for trying to take control of the web standards process. “The complaint calls on Microsoft to adhere to its own public pronouncements to support these standards, instead of stifling them with its notorious ‘Embrace, Extend and Extinguish’ strategy,” it read.

The browser “ballot box” (Credit: Ars Technica)

In 2010, the European Commission issued a ruling, forcing Microsoft to show a so-called “ballot box” to European users of Windows—a website users could see the first time they accessed the Internet that listed twelve alternative browsers to download, including Opera and Mozilla. Microsoft included this website in their European Windows installs for five years, until their obligation lapsed.


Netscape Navigator 5 never shipped. It echoes, unreleased, in the halls of software’s most public and recognized vaporware.

After Netscape open-sourced their browser as part of the Mozilla project, the focus of the company split. Between being acquired by AOL and continuing pressure from Microsoft, Netscape was on its last legs. The public trial of Microsoft brought some respite, but too little, too late. “It’s one of the great ironies here,” Netscape lawyer Gary Reback would later say, “after years of effort to get the government to do something, by [1998] Netscape’s body is already in the morgue.” Meanwhile, management inside of Netscape couldn’t decide how best to integrate with the Mozilla team. Rather than work alongside the open-source project, they continued to maintain a version of Netscape separate and apart from the public project.

In October of 1998, Brendan Eich—who was part of the core Mozilla team—published a post to the Mozilla blog. “It’s time to stop banging our heads on the old layout and FE codebase,” he wrote. “We’ve pulled more useful miles out of those vehicles than anyone rightly expected. We now have a great new layout engine that can view hundreds of top websites.”

Many Mozilla contributors agreed with the sentiment, but the rewrite Eich proposed would spell the project’s initial downfall. While Mozilla tinkered away on a new rendering engine for the browser—which would soon be known as Gecko—Netscape scrapped its planned version 5.

Progress ground to a halt. Zawinski, one of the Mozilla team members opposed to the rewrite, would later describe his frustration when he resigned from Netscape in 1999. “It constituted an almost-total rewrite of the browser, throwing us back six to 10 months. Now we had to rewrite the entire user interface from scratch before anyone could even browse the Web, or add a bookmark.” Scott Collins, one of the original Netscape programmers, would put it less diplomatically: “You can’t put 50 pounds of crap in a ten pound bag, it took two years. And we didn’t get out a 5.0, and that cost us everything, it was the biggest mistake ever.”

The result was a world-class browser with great standards support and a fast-running browser engine. But it wasn’t ready until April of 2000, when Netscape 6 was finally released. By then, Microsoft had eclipsed Netscape, owning 80% of the browser market. It would never be enough to take back a significant portion of that browser share.

“I really think the browser wars are over,” said one IT exec after the release of Netscape 6. He was right. Netscape would sputter out for years. As for Mozilla, that would soon be reborn as something else entirely.


The post Chapter 10: Browser Wars appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.


Three-Digit Browser Versions in March 2022

There isn’t supposed to be any sort of decision-making based on browser User-Agent strings. But, ya know, collectively, we do make those decisions.

Karl Dubost notes that there is a significant change coming to them, notably moving the version integer past two digits:

According to the Firefox release calendar, during the first quarter of 2022 (probably March), Firefox Nightly will reach version 100. It will set Firefox stable release version around May 2022 (if it doesn’t change until then).

And Chrome release calendar sets a current date of March 29, 2022.

So, we’ll be looking at UAs like:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.0.0 Safari/537.36

A bad RegEx will be getting some people for sure. But even string comparison will catch people, as Karl notes:

"80" < "99"                               // true
"80" < "100"                              // false
parseInt("80", 10) < parseInt("99", 10)   // true
parseInt("80", 10) < parseInt("100", 10)  // true

Might wanna search the ol’ codebase for navigator.userAgent and see what you’re doing.
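If you do have to touch UA strings, a safer pattern is to pull the major version out as a number before comparing. This is a minimal sketch, not code from Karl's post; the function name and regex are my own assumptions:

```javascript
// Extract the major version number for a given browser token (e.g. "Firefox",
// "Chrome") from a UA string, so comparisons are numeric, not lexicographic.
function majorVersion(ua, browserToken) {
  // Capture the integer right after e.g. "Firefox/" or "Chrome/"
  const match = ua.match(new RegExp(browserToken + "/(\\d+)"));
  return match ? parseInt(match[1], 10) : null;
}

const ua =
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0";
majorVersion(ua, "Firefox"); // a number, so 80 < 100 behaves as expected
```

Because the result is a real number, three-digit versions compare correctly, unlike the string comparison above.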



Bonsai Browser

Web-browser for research that helps programmers think clearly.

With Bonsai, rather than being like, I’m going to go use my web browser now, you hit Option + Space and it brings up a browser. It’s either full-screen or a very minimal float-over-everything window. You can visually organize things into Workspaces. I can see it being quite good for research, but also just getting you to think differently about what a “web browser” interface can be and do for you.

Perhaps for what we’re losing in browser engine diversity, we’ll gain in browser UI/UX diversity.




iOS Browser Choice

Just last week I got one of those really?! 🤨 faces when this fact came up in conversation amongst smart and engaged fellow web developers: there is no browser choice on iOS. It’s all Safari. You can download apps that are named Chrome or Firefox, or anything else, but they are just veneer over Safari. If you’re viewing a website on iOS, it’s Safari.

I should probably call it what the App Store Review Guidelines call it: WebKit. I usually think it’s more clear to refer to browsers by their common names rather than the engines behind them, since each of The Big Three web browsers has a distinct engine (for now, anyway), but in this case, the engine is the important bit.

I’ll say how I feel: that sucks. I have this expensive computer in my pocket and it feels unfair that it is hamstrung in this very specific way of not allowing other browser engines. I also have an Apple laptop and it’s not hamstrung in that way, and I really hope it never is.

There is, of course, all sorts of nuance to this. My Apple laptop is hamstrung in that I can’t just install whatever OS I want on it unless I do it a sanctioned way. I also like the fact that there is some gatekeeping in iOS apps, and sometimes wish it was more strict. Like when I try to download simple games for my kid, and I end up downloading some game that is so laden with upsells, ads, and dark patterns that I think the developer should be in prison. I wish Apple just wouldn’t allow that garbage on the App Store at all. So that’s me wishing for more and less gatekeeping at the same time.

But what sucks about this lack of browser choice on iOS isn’t just the philosophy of gatekeeping, it’s that WebKit on iOS just isn’t that great. See Dave’s post for a rundown of just some of the problems from a day-to-day web developer perspective that I relate to. And because WebKit has literally zero competition on iOS, because Apple doesn’t allow competition, the incentive to make Safari better is much lighter than it could (should) be.

It’s not something like Google’s AMP, where if you really dislike it you can both not use it on your own sites and redirect yourself away from them on other sites. This choice is made for you.

My ability to talk intelligently about this is dwarfed by many others though, so what I really want to do is point out some of that recent writing. Allow me to pull a quote from a bunch of them…

iOS Engine Choice In Depth — Alex Russell

None of this is theoretical; needing to re-develop features through a straw, using less-secure, more poorly tested and analyzed mechanisms, has led to serious security issues in alternative iOS browsers. Apple’s policy, far from insulating responsible WebKit browsers from security issues, is a veritable bug farm for the projects wrenched between the impoverished feature set of Apple’s WebKit and the features they can securely deliver with high fidelity on every other platform.

This is, of course, a serious problem for Apple’s argument as to why it should be exclusively responsible for delivering updates to browser engines on iOS.

Chrome is the new Safari. And so are Edge and Firefox. — Niels Leenheer

The Safari and Chrome team both want to make the web safer and work hard to improve the web. But they do have different views on what the web should be.

Google is focussing on improving the web by making it more capable. To expand the relevance of the web, to go beyond what is possible today. And that also means allowing it to compete with native apps, with which the Android team surely does not always agree.

Safari seems to focus on improving the web as it currently is. To let it be a safer place, much faster and more beautiful. And if you want something more, you can use an app for that.

Browser choice on Apple’s iOS: privacy and security aspects — Stuart Langridge

Alternative browsers on iOS aren’t just restricted to WebKit, they’re restricted to the version of WebKit which is in the current version of Safari. Not even different or more modern versions of WebKit itself are allowed.

Even motivated users who work hard to get out of the browser choice they’re forced into don’t actually get a choice; if they choose a different browser, they still get the same one. If there’s a requirement from people for something, the market can’t provide it because competition is not permitted.

Briefing to the UK Competition and Markets Authority on Apple’s iOS browser monopoly and Progressive Web Apps — Bruce Lawson

[…] these people at Echo Pharmacy, not only have they got a really great website, but they also have to build an app for iOS just because they want to send push notifications. And, perhaps ironically, given Apple’s insistence that they do all of this for security and privacy, is that if I did choose to install this app, I would also be giving it permission to access my health and fitness data, my contact info, my identifiers, sensitive info, financial info, user content, user data and diagnostics. Whereas, if I had push notifications and I were using a PWA, I’d be leaking none of this data.

So, we can see that despite Apple’s claims, I cannot recommend a PWA as being an equal experience on iOS, simply because of push notifications. But it’s not just hurting current business, it’s also holding back future business.


I’ve heard precious few arguments defending Apple’s choice to only allow Safari on iOS. Vague “Google can’t be trusted” sentiment is the bulk of it, whether privacy-focused, performance-focused, or both. All in all, nobody wants this complete lack of choice but Apple.

As far as I know, there isn’t any super clear language from Apple on why this requirement is in place. That would be nice to hear, because maybe then whatever the reasons are could be addressed.

We hear mind-blowing tech news all the time. I’d love to wake up one morning and have the news be “Apple now allows other browser engines on iOS.” You’ll hear a faint yesssssss in the air because I’ve screamed it so loud from my office in Bend, Oregon, you can hear it at your house.



A Primer on the Different Types of Browser Storage

In back-end development, storage is a common part of the job. Application data is stored in databases, files in object storage, transient data in caches… there are seemingly endless possibilities for storing any sort of data. But data storage isn’t limited to the back end. The front end (the browser) is equipped with many options to store data as well. We can boost our application performance, save user preferences, keep the application state across multiple sessions, or even across different computers, by utilizing this storage.

In this article, we will go through the different possibilities to store data in the browser. We will cover three use cases for each method to grasp the pros and cons. In the end, you will be able to decide what storage is the best fit for your use case. So let’s start!

The localStorage API

localStorage is one of the most popular storage options in the browser and the go-to for many developers. The data is stored across sessions, never shared with the server, and is available to all pages under the same origin (protocol, domain, and port). Storage is limited to ~5MB.

Surprisingly, the Google Chrome team doesn’t recommend using this option as it blocks the main thread and is not accessible to web workers and service workers. They launched an experiment, KV Storage, as a better version, but it was just a trial that doesn’t seem to have gone anywhere just yet.

The localStorage API is available as window.localStorage and can save only UTF-16 strings. We must make sure to convert data to strings before saving it into localStorage. The main three functions are:

  • setItem('key', 'value')
  • getItem('key')
  • removeItem('key')

They’re all synchronous, which makes them simple to work with, but they block the main thread.

It’s worth mentioning that localStorage has a twin called sessionStorage. The only difference is that data stored in sessionStorage will last only for the current session, but the API is the same.

Let’s see it in action. The first example demonstrates how to use localStorage for storing the user’s preferences. In our case, it’s a boolean property that turns on or off the dark theme of our site.

You can check the checkbox and refresh the page to see that the state is saved across sessions. Take a look at the save and load functions to see how I convert the value to string and how I parse it. It’s important to remember that we can store only strings.
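The save/load pair might look something like this sketch; the `dark_theme` key and the function names are assumptions, not the demo’s exact code:

```javascript
// Persist a boolean preference. localStorage stores only strings, so
// stringify on the way in and parse on the way out.
function savePreference(enabled) {
  localStorage.setItem("dark_theme", JSON.stringify(enabled));
}

function loadPreference() {
  const raw = localStorage.getItem("dark_theme");
  // getItem returns null when the key has never been set; default to false
  return raw === null ? false : JSON.parse(raw);
}
```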

This second example loads Pokémon names from the PokéAPI.

We send a GET request using fetch and list all the names in a ul element. Upon getting the response, we cache it in the localStorage so our next visit can be much faster or even work offline. We have to use JSON.stringify to convert the data to string and JSON.parse to read it from the cache.
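The cache-then-network flow can be sketched like this. Using the request URL as the cache key is my assumption; the response shape (`data.results[].name`) follows the PokéAPI:

```javascript
// Return cached names if we have them; otherwise fetch, cache, and return.
async function loadPokemonNames(url) {
  const cached = localStorage.getItem(url);
  if (cached !== null) {
    // Cache hit: no network round trip needed
    return JSON.parse(cached);
  }
  const response = await fetch(url);
  const data = await response.json();
  const names = data.results.map((p) => p.name);
  // Stringify before storing, since localStorage holds only strings
  localStorage.setItem(url, JSON.stringify(names));
  return names;
}
```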

In this last example, I demonstrate a use case where the user can browse through different Pokémon pages, and the current page is saved for the next visits.

The issue with localStorage, in this case, is that the state is saved locally. This behavior doesn’t allow us to share the desired page with our friends. Later, we will see how to overcome this issue.

We will use these three examples in the next storage options as well. I forked the Pens and just changed the relevant functions. The overall skeleton is the same for all methods.

The IndexedDB API

IndexedDB is a modern storage solution in the browser. It can store a significant amount of structured data — even files and blobs. Like every database, IndexedDB indexes the data for running queries efficiently. It’s more complex to use IndexedDB: we have to create a database and object stores, and use transactions.

Compared to localStorage, IndexedDB requires a lot more code. In the examples, I use the native API with a Promise wrapper, but I highly recommend using third-party libraries to help you out. My recommendation is localForage because it uses the same localStorage API but implements it in a progressive enhancement manner, meaning if your browser supports IndexedDB, it will use it; and if not, it will fall back to localStorage.

Let’s code, and head over to our user preferences example!

idb is the Promise wrapper that we use instead of working with a low-level events-based API. They’re almost identical, so don’t worry. The first thing to notice is that every access to the database is async, meaning we don’t block the main thread. Compared to localStorage, this is a major advantage.

We need to open a connection to our database so it will be available throughout the app for reading and writing. We give our database a name, my-db, a schema version, 1, and an update function to apply changes between versions. This is very similar to database migrations. Our database schema is simple: only one object store, preferences. An object store is the equivalent of an SQL table. To write or read from the database, we must use transactions. This is the tedious part of using IndexedDB. Have a look at the new save and load functions in the demo.

No doubt that IndexedDB has much more overhead and the learning curve is steeper compared to localStorage. For the key value cases, it might make more sense to use localStorage or a third-party library that will help us be more productive.

Application data, such as in our Pokémon example, is the forte of IndexedDB. You can store hundreds of megabytes and even more in this database. You can store all the Pokémon in IndexedDB and have them available offline and even indexed! This is definitely the one to choose for storing app data.

I skipped the implementation of the third example, as IndexedDB doesn’t introduce any difference in this case compared to localStorage. Even with IndexedDB, the user will still not share the selected page with others or bookmark it for future use. They’re both not the right fit for this use case.

Cookies

Using cookies is a unique storage option. It’s the only storage that is also shared with the server. Cookies are sent as part of every request, whether the user browses through pages in our app or sends Ajax requests. This allows us to create a shared state between the client and the server, and also to share state between multiple applications on different subdomains. None of the other storage options described in this article can do that. One caveat: cookies are sent with every request, which means we have to keep our cookies small to maintain a decent request size.

The most common use for cookies is authentication, which is out of the scope of this article. Just like localStorage, cookies can store only strings. The cookies are concatenated into one semicolon-separated string, and they are sent in the Cookie header of the request. You can set many attributes for every cookie, such as expiration, allowed domains, allowed pages, and many more.

In the examples, I show how to manipulate cookies on the client side, but it’s also possible to change them in your server-side application.

Saving the user’s preferences in a cookie can be a good fit if the server can utilize it somehow. For example, in the theme use case, the server can deliver the relevant CSS file and reduce potential bundle size (in case we’re doing server-side-rendering). Another use case might be to share these preferences across multiple subdomain apps without a database.

Reading and writing cookies with JavaScript is not as straightforward as you might think. To save a new cookie, you need to set document.cookie — check out the save function in the example above. I set the dark_theme cookie and give it a max-age attribute to make sure it will not expire when the tab is closed. I also add the SameSite and Secure attributes. These are necessary because CodePen uses an iframe to run the examples, but you will not need them in most cases. Reading a cookie requires parsing the cookie string.

A cookie string looks like this:

key1=value1; key2=value2; key3=value3

So, first, we have to split the string by semicolon. Now, we have an array of cookies in the form of key1=value1, so we need to find the right element in the array. In the end, we split by the equal sign and get the last element in the new array. A bit tedious, but once you implement the getCookie function (or copy it from my example :P) you can forget it.
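The helpers described above might be sketched like so (the cookie name in the usage line and the deliberately minimal attribute handling are my assumptions):

```javascript
// Write a cookie with a max-age attribute.
function setCookie(name, value, maxAgeSeconds) {
  document.cookie = `${name}=${value}; max-age=${maxAgeSeconds}`;
}

// Read a cookie back by parsing the "key1=value1; key2=value2" string.
function getCookie(name) {
  const pair = document.cookie
    .split(";")
    .map((part) => part.trim().split("="))
    .find(([key]) => key === name);
  return pair ? pair[1] : null;
}

// Usage: setCookie("dark_theme", "true", 31536000); getCookie("dark_theme");
```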

Saving application data in a cookie can be a bad idea! It will drastically increase the request size and will reduce application performance. Also, the server cannot benefit from this information as it’s a stale version of the information it already has in its database. If you use cookies, make sure to keep them small.

The pagination example is also not a good fit for cookies, just like localStorage and IndexedDB. The current page is a temporary state that we would like to share with others, and none of these methods achieves that.

URL storage

URL is not a storage, per se, but it’s a great way to create a shareable state. In practice, it means adding query parameters to the current URL that can be used to recreate the current state. The best example would be search queries and filters. If we search the term flexbox on CSS-Tricks, the URL will be updated to https://css-tricks.com/?s=flexbox. See how easy it is to share a search query once we use the URL? Another advantage is that you can simply hit the refresh button to get newer results of your query or even bookmark it.

We can save only strings in the URL, and its maximum length is limited, so we don’t have much space. We will have to keep our state small. No one likes long and intimidating URLs.

Again, CodePen uses an iframe to run the examples, so you cannot see the URL actually changing. Worry not, because all the bits and pieces are there so you can use them wherever you want.

We can access the query string through window.location.search and, lucky us, it can be parsed using the URLSearchParams class. No need to apply any complex string parsing anymore. When we want to read the current value, we can use the get function. When we want to write, we can use set. It’s not enough to only set the value; we also need to update the URL. This can be done using history.pushState or history.replaceState, depending on the behavior we want to accomplish.
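Putting those pieces together for the pagination case might look like this sketch; the `page` parameter name matches the example, everything else is an assumption:

```javascript
// Read the current page from the query string, defaulting to page 1.
function getPage() {
  const params = new URLSearchParams(window.location.search);
  // get() returns null for a missing parameter
  return parseInt(params.get("page") ?? "1", 10);
}

// Write the page back into the URL without adding a history entry.
function setPage(page) {
  const params = new URLSearchParams(window.location.search);
  params.set("page", String(page));
  history.replaceState(null, "", `${window.location.pathname}?${params}`);
}
```

Use `history.pushState` instead of `replaceState` if you want each page change to be a separate entry the back button can walk through.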

I wouldn’t recommend saving a user’s preferences in the URL as we will have to add this state to every URL the user visits, and we cannot guarantee it; for example, if the user clicks on a link from Google Search.

Just like cookies, we cannot save application data in the URL as we have minimal space. And even if we did manage to store it, the URL will be long and not inviting to click. Might look like a phishing attack of sorts.

As our pagination example shows, temporary application state is the best fit for the URL query string. Again, you cannot see the URL changes, but the URL updates with the ?page=x query parameter every time you click on a page. When the web page loads, it looks for this query parameter and fetches the right page accordingly. Now we can share this URL with our friends so they can enjoy our favorite Pokémon.

Cache API

The Cache API is storage for the network level. It is used to cache network requests and their responses. The Cache API fits perfectly with service workers. A service worker can intercept every network request, and using the Cache API, it can easily cache both the request and its response. The service worker can also return an existing cache item as a network response instead of fetching it from the server. By doing so, you can reduce network load times and make your application work even when offline. Originally, the Cache API was created for service workers, but in modern browsers it is also available in window, iframe, and worker contexts. It’s a very powerful API that can drastically improve the application’s user experience.

Just like IndexedDB, Cache API storage is not limited, and you can store hundreds of megabytes or even more if you need to. The API is asynchronous, so it will not block your main thread, and it’s accessible through the global property caches.
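A common cache-first strategy can be sketched like this; the cache name "v1" is an assumption, and the wiring into a service worker’s fetch event is shown as a comment:

```javascript
// Answer from the Cache API when possible; otherwise fetch and store a copy.
async function cacheFirst(request) {
  const cache = await caches.open("v1");
  const cached = await cache.match(request);
  if (cached) {
    return cached; // served from cache, works offline too
  }
  const response = await fetch(request);
  // Response bodies are one-shot streams, so store a clone, return the original
  await cache.put(request, response.clone());
  return response;
}

// In a service worker you would wire it up roughly like:
// self.addEventListener("fetch", (event) =>
//   event.respondWith(cacheFirst(event.request))
// );
```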

To read more about the Cache API, the Google Chrome team has made a great tutorial.

Chris created an awesome Pen with a practical example of combining service workers and the Cache API.

Bonus: Browser extension

If you build a browser extension, you have another option to store your data. I discovered it while working on my extension, daily.dev. It’s available via chrome.storage or browser.storage, if you use Mozilla’s polyfill. Make sure to request a storage permission in your manifest to get access.

There are two types of storage options, local and sync. The local storage is self-explanatory; it’s kept locally and isn’t shared. The sync storage is synced as part of the user’s Google account: anywhere you install the extension with the same account, this storage will be synced. Pretty cool feature if you ask me. Both have the same API, so it’s super easy to switch back and forth if needed. It’s async storage, so it doesn’t block the main thread like localStorage does. Unfortunately, I cannot create a demo for this storage option as it requires a browser extension, but it’s pretty simple to use, and almost like localStorage. For more information about the exact implementation, refer to the Chrome docs.
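The preference example from earlier might look like this with extension storage. This sketch assumes the promise-based calls available in Manifest V3, the "storage" permission in the manifest, and a hypothetical `dark_theme` key:

```javascript
// Save a preference to the synced area; chrome.storage.sync follows the
// account across devices. Swap in chrome.storage.local to keep it on
// this machine only -- both areas expose the same API.
function savePreference(enabled) {
  return chrome.storage.sync.set({ dark_theme: enabled });
}

// Load it back; get() resolves to an object keyed by the requested name.
async function loadPreference() {
  const result = await chrome.storage.sync.get("dark_theme");
  return result.dark_theme ?? false;
}
```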

Conclusion

The browser has many options we can utilize to store our data. Following the Chrome team’s advice, our go-to storage should be IndexedDB. It’s async storage with enough space to store anything we want. localStorage is not encouraged, but is easier to use than IndexedDB. Cookies are a great way to share the client state with the server but are mostly used for authentication.

If you want to create pages with a shareable state such as a search page, use the URL’s query string to store this information. Lastly, if you build an extension, make sure to read about chrome.storage.

