
Better Collaboration With Pull Requests

This article is part of our “Advanced Git” series. Be sure to follow us on Twitter or sign up for our newsletter to hear about the next articles!

In this third installment of our “Advanced Git” series, we’ll look at pull requests: a great feature that helps both small and large teams of developers. Pull requests not only improve the review and feedback process, they also make it easier to track and discuss code changes. Last but not least, pull requests are the ideal way to contribute to repositories you don’t have write access to.

Advanced Git series:

  • Part 1: Creating the Perfect Commit in Git
  • Part 2: Branching Strategies in Git
  • Part 3: Better Collaboration With Pull Requests
    You are here!
  • Part 4: Merge Conflicts
    Coming soon!
  • Part 5: Rebase vs. Merge
  • Part 6: Interactive Rebase
  • Part 7: Cherry-Picking Commits in Git
  • Part 8: Using the Reflog to Restore Lost Commits

What are pull requests?

First of all, it’s important to understand that pull requests are not a core Git feature. Instead, they are provided by the Git hosting platform you’re using: GitHub, GitLab, Bitbucket, Azure DevOps, and others all have this functionality built into their platforms.

Why should I create a pull request?

Before we get into the details of how to create the perfect pull request, let’s talk about why you would want to use this feature at all.

Imagine you’ve just finished a new feature for your software. Maybe you’ve been working in a feature branch, so your next step would be merging it into the mainline branch (master or main). This is totally fine in some cases, for example, if you’re the only developer on the project or if you’re experienced enough and know for certain your team members will be happy about it.

By the way: If you want to know more about branches and typical branching workflows, have a look at our second article in our “Advanced Git” series: “Branching Strategies in Git.”

Without a pull request, you would jump right to merging your code.

However, what if your changes are a bit more complex and you’d like someone else to look at your work? This is what pull requests were made for. With pull requests you can invite other people to review your work and give you feedback. 

A pull request invites reviewers to provide feedback before merging.

Once a pull request is open, you can discuss your code with other developers. Most Git hosting platforms allow other users to add comments and suggest changes during that process. After your reviewers have approved your work, it might be merged into another branch.

Having a reviewing workflow is not the only reason for pull requests, though. They come in handy if you want to contribute to other repositories you don’t have write access to. Think of all the open source projects out there: if you have an idea for a new feature, or if you want to submit a patch, pull requests are a great way to present your ideas without having to join the project and become a main contributor.

This brings us to a topic that’s tightly connected to pull requests: forks.

Working with forks

A fork is your personal copy of an existing Git repository. Going back to our open source example: your first step is to create a fork of the original repository. After that, you can change code in your own personal copy.

Your fork of the original repository is where you make your changes.

After you’re done, you open a pull request to ask the owners of the original repository to include your changes. The owner or one of the other main contributors can review your code and then decide to include it (or not).

Changes flow between the original repository (which you can’t write to, hence the lock) and your fork.

Important Note: Pull requests are always based on branches and not on individual commits! When you create a pull request, you base it on a certain branch and request that it gets included.

Making a reviewer’s life easier: How to create a great pull request

As mentioned earlier, pull requests are not a core Git feature. Instead, every Git platform has its own design and its own idea about how a pull request should work. They look different on GitLab, GitHub, Bitbucket, etc. Every platform has a slightly different workflow for tracking, discussing, and reviewing changes.

Pull requests are implemented differently on Bitbucket, GitHub, and GitLab.

Desktop GUIs like the Tower Git client, for example, can make this easier: they provide a consistent user interface, no matter what code hosting service you’re using.

Browsing pull requests in the Tower Git client: selecting a pull request reveals its details.

Having said that, the general workflow is always the same and includes the following steps:

  1. If you don’t have write access to the repository in question, the first step is to create a fork, i.e. your personal version of the repository.
  2. Create a new local branch in your forked repository. (Reminder: pull requests are based on branches, not on commits!)
  3. Make some changes in your local branch and commit them.
  4. Push the changes to your own remote repository.
  5. Create a pull request with your changes and start the discussion with others.
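
On the command line, those steps might look roughly like this (a sketch: the URLs and branch names are placeholders, not from any real project):

# 1. Clone your fork (created beforehand on the hosting platform)
git clone https://github.com/your-username/project.git
cd project

# 2. Create a new local branch for your work
git checkout -b my-feature

# 3. Make your changes and commit them
git add .
git commit -m "Add my feature"

# 4. Push the branch to your own remote repository (the fork)
git push -u origin my-feature

# 5. Open a pull request on the hosting platform, based on my-feature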

Let’s look at the pull request itself and how to create one that makes another developer’s life easier. First of all, it should be short so it can be reviewed quickly. It’s much harder to understand 3,000 lines of changes than 30.

Also, make sure to add a good and self-explanatory title and a meaningful description. Try to describe what you changed, why you opened the pull request, and how your changes affect the project. Most platforms allow you to add a screenshot which can help to demonstrate the changes.

Approve, merge, or decline?

Once your changes have been approved, you (or someone with write access) can merge the forked branch into the main branch. But what if the reviewer doesn’t want to merge the pull request in its current state? Well, you can always add new commits, and after pushing that branch, the existing pull request is updated.

Alternatively, the owner or someone else with write access can decline the pull request when they don’t want to merge the changes.

Safety net for developers

As you can see, pull requests are a great way to communicate and collaborate with your fellow developers. By asking others to review your work, you make sure that only high-quality code enters your codebase. 

If you want to dive deeper into advanced Git tools, feel free to check out my (free!) “Advanced Git Kit”: it’s a collection of short videos about topics like branching strategies, Interactive Rebase, Reflog, Submodules and much more.

Advanced Git series:

  • Part 1: Creating the Perfect Commit in Git
  • Part 2: Branching Strategies in Git
  • Part 3: Better Collaboration With Pull Requests
    You are here!
  • Part 4: Merge Conflicts
    Coming soon!
  • Part 5: Rebase vs. Merge
  • Part 6: Interactive Rebase
  • Part 7: Cherry-Picking Commits in Git
  • Part 8: Using the Reflog to Restore Lost Commits


How to Cancel Pending API Requests to Show Correct Data

I recently had to create a widget in React that fetches data from multiple API endpoints. As the user clicks around, new data is fetched and marshalled into the UI. But it caused some problems.

One problem quickly became evident: if the user clicked around fast enough, as previous network requests got resolved, the UI was updated with incorrect, outdated data for a brief period of time.

We can debounce our UI interactions, but that fundamentally does not solve our problem. Outdated network fetches will resolve and update our UI with wrong data up until the final network request finishes and updates our UI with the final correct state. The problem becomes more evident on slower connections. Furthermore, we’re left with useless network requests that waste the user’s data.
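
To see why, here’s a minimal debounce sketch (the endpoint parameters and the renderDeals function are stand-ins for illustration, not from the original demo). Even though the fetch only fires after the user pauses, two dispatched requests can still resolve out of order:

// A minimal debounce: delays calls to fn until the user pauses for `delay` ms
function debounce(fn, delay) {
  let timer
  return (...args) => {
    clearTimeout(timer)
    timer = setTimeout(() => fn(...args), delay)
  }
}

// Debounced or not, a slow earlier request can resolve after a faster later one
const fetchDeals = debounce(upperPrice => {
  fetch(`https://www.cheapshark.com/api/1.0/deals?upperPrice=${upperPrice}`)
    .then(res => res.json())
    .then(renderDeals) // hypothetical render function
}, 300)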

Here is an example I built to illustrate the problem. It grabs game deals from Steam via the cool Cheap Shark API using the modern fetch() method. Try rapidly updating the price limit and you will see how the UI flashes with wrong data until it finally settles.

The solution

Turns out there is a way to abort pending asynchronous requests using an AbortController. You can use it to cancel not only HTTP requests, but event listeners as well.

The AbortController interface represents a controller object that allows you to abort one or more Web requests as and when desired.

Mozilla Developer Network

The AbortController API is simple: it exposes an AbortSignal that we insert into our fetch() calls, like so:

const abortController = new AbortController()
const signal = abortController.signal

fetch(url, { signal })

From here on, we can call abortController.abort() to make sure our pending fetch is aborted.
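
One detail worth knowing: when a fetch is aborted, its promise rejects with a DOMException named AbortError. Here’s a minimal sketch (url is a stand-in) showing the abort call together with error handling that ignores aborts:

const controller = new AbortController()

fetch(url, { signal: controller.signal })
  .then(res => res.json())
  .then(data => console.log(data))
  .catch(err => {
    // An aborted request rejects with an AbortError, which we can safely ignore
    if (err.name !== 'AbortError') {
      throw err
    }
  })

// Later, when the request is superseded:
controller.abort()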

Let’s rewrite our example to make sure we are canceling any pending fetches and marshalling only the latest data received from the API into our app:

The code is mostly the same, with a few key distinctions:

  1. It creates a new cached variable, abortController, in a useRef in the <App /> component.
  2. For each new fetch, it initializes that fetch with a new AbortController and obtains its corresponding AbortSignal.
  3. It passes the obtained AbortSignal to the fetch() call.
  4. It aborts the previous pending fetch whenever a new one begins.
const App = () => {
  // Same as before, local variable and state declaration
  // ...

  // Create a new cached variable abortController in a useRef() hook
  const abortController = React.useRef()

  React.useEffect(() => {
    // If there is a pending fetch request with an associated AbortController, abort it
    if (abortController.current) {
      abortController.current.abort()
    }

    // Assign a new AbortController for the latest fetch to our useRef variable
    abortController.current = new AbortController()
    const { signal } = abortController.current

    // Same as before
    fetch(url, { signal }).then(res => {
      // Rest of our fetching logic, same as before
    })
  }, [
    abortController,
    sortByString,
    upperPrice,
    lowerPrice,
  ])
}

Conclusion

That’s it! We now have the best of both worlds: we debounce our UI interactions and we manually cancel outdated pending network fetches. This way, we are sure that our UI is updated once and only with the latest data from our API.



Stay DRY Using axios for API Requests

HTTP requests are a crucial part of any web application that’s communicating with a back-end server. The front end needs some data, so it asks for it via a network HTTP request (or Ajax, as it tends to be called), and the server returns an answer. Almost every website these days does this in some fashion.

With a larger site, we can expect to see more of this. More data, more APIs, and more special circumstances. As sites grow like this, it is important to stay organized. One classic concept is DRY (short for Don’t Repeat Yourself), which is the process of abstracting code to prevent repeating it over and over. This is ideal because it often allows us to write something once, use it in multiple places, and update in a single place rather than each instance.

We might also reach for libraries to help us. For Ajax, axios is a popular choice. You might already be familiar with it, and even use it for things like independent POST and GET requests while developing. 

Installation and the basics

It can be installed using npm (or yarn):

npm install axios

An independent POST request using Axios looks like this:

axios.post('https://axios-app.firebaseio.com/users.json', formData)
  .then(res => console.log(res))
  .catch(error => console.log(error))

Native JavaScript has multiple ways of making these requests, too. Notably, fetch(). So why use a library at all? Well, for one, error handling in fetch is pretty wonky; you’ll have a better time with axios right out of the gate. If you’d like to see a comparison, we have an article that covers both and an article that talks about the value of abstraction with stuff like this.
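
To make that concrete, here’s a quick comparison sketch (the endpoint is a placeholder). fetch() resolves even for HTTP error statuses like 404, so you have to check res.ok yourself, while axios rejects any non-2xx response:

// fetch() resolves even on a 404, so we must check res.ok ourselves
fetch('/users.json')
  .then(res => {
    if (!res.ok) throw new Error(`HTTP ${res.status}`)
    return res.json()
  })
  .catch(error => console.log(error))

// axios rejects any non-2xx response out of the box
axios.get('/users.json')
  .then(res => console.log(res.data))
  .catch(error => console.log(error))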

Another reason to reach for axios? It gives us more opportunities for DRYness, so let’s look into that. 

Global config

We can set up a global configuration (e.g. in our main.js file) that handles all application requests using a standard configuration that is set through a default object that ships with axios. 

This object contains:

  • baseURL: A relative URL that acts as a prefix to all requests; each request can append its own path to it
  • headers: Custom headers that can be set based on the requests
  • timeout: The point at which the request is aborted, measured in milliseconds. The default value is 0, meaning no timeout is applied.
  • withCredentials: Indicates whether or not cross-site Access-Control requests should be made using credentials. The default is false.
  • responseType: Indicates the type of data the server will return, with options including json (default), arraybuffer, document, text, and stream.
  • responseEncoding: Indicates the encoding to use for decoding responses. The default value is utf8.
  • xsrfCookieName: The name of the cookie to use as a value for the XSRF token. The default value is XSRF-TOKEN.
  • xsrfHeaderName: The name of the HTTP header that carries the XSRF token value. The default value is X-XSRF-TOKEN.
  • maxContentLength: Defines the maximum allowed size of the HTTP response content in bytes
  • maxBodyLength: Defines the maximum allowed size of the HTTP request content in bytes

Most of the time, you’ll only be using baseURL, headers, and maybe timeout. The rest are less frequently needed, as they have smart defaults, but it’s nice to know they are there in case you need to fix up requests.

This is the DRYness at work. For each request, we don’t have to repeat the baseURL of our API or repeat important headers that we might need on every request. 

Here’s an example where our API has a base, but it also has multiple different endpoints. First, we set up some defaults:

// main.js
import axios from 'axios';

axios.defaults.baseURL = 'https://axios-app.firebaseio.com'  // the prefix of the URL
axios.defaults.headers.get['Accept'] = 'application/json'    // default header for all GET requests
axios.defaults.headers.post['Accept'] = 'application/json'   // default header for all POST requests

Then, in a component, we can use axios more succinctly, not needing to set those headers, but still having an opportunity to customize the final URL endpoint:

// form.js component
import axios from 'axios';

export default {
  methods : {
    onSubmit () {
      // The URL is now https://axios-app.firebaseio.com/users.json
      axios.post('/users.json', formData)
        .then(res => console.log(res))
        .catch(error => console.log(error))
    }
  }
}

Note: This example is in Vue, but the concept extends to any JavaScript situation.

Custom instance

Setting up a “custom instance” is similar to a global config, but scoped to specified components. So, it’s still a DRY technique, but with hierarchy. 

We’ll set up our custom instance in a new file (let’s call it authAxios.js) and import it into the components concerned.

// authAxios.js
import axios from 'axios'

const customInstance = axios.create({
  baseURL : 'https://axios-app.firebaseio.com'
})
customInstance.defaults.headers.post['Accept'] = 'application/json'

// Or like this...

const customInstance = axios.create({
  baseURL : 'https://axios-app.firebaseio.com',
  headers: { 'Accept': 'application/json' }
})

// Export the instance so components can import it
export default customInstance

And then we import this file into the form components:

// form.js component

// import from our custom instance
import axios from './authAxios'

export default {
  methods : {
    onSubmit () {
      axios.post('/users.json', formData)
        .then(res => console.log(res))
        .catch(error => console.log(error))
    }
  }
}

Interceptors

Interceptors help with cases where the global config or a custom instance is too generic: anything you set up in those objects, like a header, applies to every request within the affected components. Interceptors can change any config property on the fly. For instance, we can send a different header (even if we have set one up in the object) based on any condition we choose within the interceptor.

Interceptors can live in the main.js file or in a custom instance file. A request interceptor runs before the request is sent out, and a response interceptor runs before the response is handed back to our code, so we can change how both are handled.

// Add a request interceptor
axios.interceptors.request.use(function (config) {
  // Do something before the request is sent; here, we insert a timeout
  // only for requests with a particular baseURL
  if (config.baseURL === 'https://axios-app.firebaseio.com/users.json') {
    config.timeout = 4000
  }
  console.log(config)
  return config
}, function (error) {
  // Do something with the request error
  return Promise.reject(error)
})

// Add a response interceptor
axios.interceptors.response.use(function (response) {
  // Do something with the response data, like console.log, changing a header,
  // or (as here) conditional behaviour based on the response status:
  // change the route or pop up an alert box
  if (response.status === 200 || response.status === 201) {
    router.replace('homepage')
  } else {
    alert('Unusual behaviour')
  }
  console.log(response)
  return response
}, function (error) {
  // Do something with the response error
  return Promise.reject(error)
})

Interceptors, as the name implies, intercept both requests and responses to act differently based on whatever conditions are provided. For instance, in the request interceptor above, we inserted a conditional timeout only for requests with a particular baseURL. For the response, we can intercept it and modify what we get back, like changing the route or showing an alert box, depending on the status code. We can even provide multiple conditions based on different error codes.

Interceptors will prove useful as your project becomes larger and you start to have lots of routes and nested routes all communicating to servers based on different triggers. Beyond the conditions I set above, there are many other situations that can warrant the use of interceptors, based on your project.
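
To give one concrete example beyond the conditions above, a very common use of a request interceptor is attaching an authentication token to every outgoing request. A minimal sketch, assuming the token is kept in localStorage (that storage choice is an assumption for illustration):

// Attach an auth token to every outgoing request, if one exists
axios.interceptors.request.use(function (config) {
  const token = localStorage.getItem('token') // assumed storage location
  if (token) {
    config.headers.Authorization = `Bearer ${token}`
  }
  return config
}, function (error) {
  return Promise.reject(error)
})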

Interestingly, we can eject an interceptor to prevent it from having any effect at all. We’ll have to assign the interceptor to a variable and eject it using the appropriately named eject method.

const reqInterceptor = axios.interceptors.request.use(function (config) {
  // Insert a timeout only for requests with a particular baseURL
  if (config.baseURL === 'https://axios-app.firebaseio.com/users.json') {
    config.timeout = 4000
  }
  console.log(config)
  return config
}, function (error) {
  // Do something with the request error
  return Promise.reject(error)
})

// Add a response interceptor
const resInterceptor = axios.interceptors.response.use(function (response) {
  // Conditional behaviour based on the response status: change the route
  // or pop up an alert box
  if (response.status === 200 || response.status === 201) {
    router.replace('homepage')
  } else {
    alert('Unusual behaviour')
  }
  console.log(response)
  return response
}, function (error) {
  // Do something with the response error
  return Promise.reject(error)
})

axios.interceptors.request.eject(reqInterceptor)
axios.interceptors.response.eject(resInterceptor)

Although it’s less commonly used, it’s possible to put an interceptor into a conditional statement or remove one based on some event.


Hopefully this gives you a good idea of how axios works, as well as how it can be used to keep API requests DRY in an application. While we scratched the surface by calling out common use cases and configurations, axios has so many other advantages you can explore in the documentation, including the ability to cancel requests and protect against cross-site request forgery, among other awesome possibilities.
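
As a quick taste of request cancellation: recent axios versions (v0.22.0 and newer) accept the same AbortController signal that native fetch() uses, while older versions relied on the CancelToken API. A minimal sketch with the modern approach:

// Requires axios v0.22.0+ for AbortController support
const controller = new AbortController()

axios.get('/users.json', { signal: controller.signal })
  .then(res => console.log(res.data))
  .catch(error => {
    // axios.isCancel() identifies errors caused by cancellation
    if (axios.isCancel(error)) {
      console.log('Request canceled')
    }
  })

// Abort the pending request
controller.abort()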


Automatically compress images on Pull Requests

Sarah introduced us to GitHub Actions right after it dropped about a year ago. Now they have improved the feature and are touting its CI/CD abilities. Run tests, do deployment, do whatever stuff computers do! It’s essentially a YAML file that says run this, then this, then this, etc., with configuration.

GitLab kinda paved the way on this particular feature, although you don’t get the machines for free on GitLab, nor does it seem like there is an ecosystem of tasks to build your actions workflow from.

It’s that ecosystem of tasks that I would think makes this especially interesting. “Democratizing DevOps,” if I’m feeling saucy. Karolina Szczur and Ben Schwarz’s new action to automatically optimize all images in a pull request showcases that. It makes it almost trivially easy to add to any Git(Hub)-based workflow and has huge, obvious wins. Perhaps the future is piecing together our own pipelines from open source efforts like this as needed.
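
For flavor, here’s roughly what such a workflow file might look like. This is a hypothetical sketch: the action name and its inputs are assumptions, so check the action’s own documentation for the real ones.

# Hypothetical sketch of .github/workflows/compress-images.yml
name: Compress images
on: pull_request
jobs:
  compress:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Action name and inputs are assumptions; see the action's docs
      - uses: calibreapp/image-actions@main
        with:
          githubToken: ${{ secrets.GITHUB_TOKEN }}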

Looks nice, eh?

Direct Link to Article
