
Introduction to Docker Containers

DevGate

Author: Qamar Khurshid

Docker containers are a popular and efficient way to package and deploy applications, and the Docker command-line interface (CLI) provides a convenient way to manage and deploy containers. In this blog post, we’ll take a closer look at the Docker CLI and some of its basic commands, and explain how to use them to deploy Docker containers.

The Docker CLI is a tool that allows users to interact with Docker from the command line, and provides a wide range of commands for managing and deploying Docker containers. Some of the most commonly used Docker CLI commands include:

docker run: This command is used to run a Docker container. It takes a Docker image as input, and creates a new container based on that image.

docker ps: This command lists all running Docker containers on the host machine.

docker stop: This command stops a running Docker container. It takes the container’s name or ID as input.

docker rm: This command removes a stopped Docker container. It takes the container’s name or ID as input.

docker build: This command is used to build a Docker image from a Dockerfile. A Dockerfile is a text file that contains the instructions for building a Docker image.

To deploy a Docker container, you first need to create a Docker image. This can be done using the docker build command, which takes a Dockerfile as input and produces a Docker image as output. Once you have a Docker image, you can use the docker run command to create and start a new container based on that image (the docker start command is only needed to restart a container that has been stopped).

For example, let’s say you have a simple Node.js application that you want to deploy as a Docker container. First, you would create a Dockerfile that specifies the instructions for building a Docker image for the application. This might look something like this:
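A minimal sketch of such a Dockerfile (the Node base image version, the port, and the index.js entry file are assumptions for illustration):

```dockerfile
# Use an official Node.js runtime as the base image
FROM node:18

# Set the working directory inside the container
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the application source
COPY . .

# The app is assumed to listen on port 3000
EXPOSE 3000

# Start the application
CMD ["node", "index.js"]
```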

Next, you can use the docker build command to build a Docker image from the Dockerfile:

docker build -t my-node-app .

This will create a Docker image named my-node-app based on the instructions in the Dockerfile. Once you have the Docker image, you can use the docker run command to create and start a Docker container based on the image:

docker run -d -p 3000:3000 --name my-node-app my-node-app

This will create a new Docker container named my-node-app, and start it in detached mode (-d). It will also map port 3000 on the host machine to port 3000 on the container (-p 3000:3000), which will allow you to access the application from the host machine.

To verify that the container is running, you can use the docker ps command, which will list all running Docker containers on the host machine:

docker ps

This will show the running Docker containers, along with their names, IDs, status, and port mappings.

Lazy Loading with React


Author: Muhammad Fraz

The world of front-end development is constantly evolving, and people are creating more and more complex and powerful applications every day. Naturally, this has led to massive code bundles that can drastically increase app load times and negatively impact the user experience. This is where lazy loading comes in.

What is Lazy Loading?

Lazy loading is a design pattern for optimizing web and mobile apps.

When we launch a React web application, it normally bundles the whole application immediately, loading everything up front — wait, no em-dash: it loads all of the application’s pages, images, and content at once, potentially resulting in a slow load time and poor overall performance, depending on the size of the content and the available internet bandwidth.

In earlier versions of React, lazy loading was implemented using third-party libraries. However, with the v16.6 update, React introduced two native functions to implement lazy loading.

In this tutorial, we’ll show you how lazy loading works in React.js, demonstrate how to use code splitting and lazy loading with React.lazy and React.Suspense, and create a React demo app to see these concepts in action.

The Benefits of lazy loading

The essential benefits of lazy loading are performance related:

  • Fast initial loading: By reducing the page weight, lazy loading a web page allows for a faster initial page load time.
  • Less bandwidth consumption: Lazy-loaded images save data and bandwidth, which is especially valuable for people who don’t have fast internet.
  • Decreased work for the browser: When images are lazy-loaded, your browser does not need to process or decode them until they are requested by scrolling the page.

React.lazy() is a function that enables you to render a dynamic import as a regular component. Dynamic imports are a method of code splitting. React.lazy() removes the need for a third-party library such as react-loadable or react-waypoint.

// without React.lazy()
import NewComponent from './NewComponent';

const MyComponent = () => (
  <div>
    <NewComponent />
  </div>
);

// with React.lazy()
const NewComponent = React.lazy(() => import('./NewComponent'));

const MyComponent = () => (
  <div>
    <NewComponent />
  </div>
);

React.Suspense lets you specify a loading indicator in case the components in the tree below it are not yet ready to render. You wrap the lazy components in a Suspense component and pass it a fallback prop; the fallback content is shown as a placeholder until all the lazy components below it have loaded.

import React, { Suspense } from "react";

const LazyComponent = React.lazy(() => import('./NewComponent'));
const LazyComponent1 = React.lazy(() => import('./NewComponent1'));

const MyComponent = () => (
  <Suspense fallback={<div>Loading…</div>}>
    <LazyComponent />
    <LazyComponent1 />
    {/* Here you can add more lazy components. */}
  </Suspense>
);

The Disadvantages of lazy loading

As already mentioned, lazy loading has many advantages. However, overuse can have a significant negative impact on your application, so it’s important to understand when you should and should not use lazy loading. The disadvantages are listed below:

  • Not suitable for small-scale applications.
  • Requires additional communication with the server to fetch resources.
  • Can affect SEO and ranking.

AWS QuickSight vs Microsoft Power BI


Author: Muhammad Zaki Khurshid

Business Intelligence (BI) tools such as AWS QuickSight, Tableau, Power BI, and IBM Cognos (among many others) are designed to assist companies in generating business insights with the help of visuals. Since the BI market is highly competitive, the companies behind these solutions have added distinct features in order to target a certain customer base that might use those features in their business requirements.
In this article, we shall make a brief comparison between the two Business Intelligence solutions: AWS QuickSight and Microsoft Power BI. We will first talk about the two technologies separately, highlighting the key features that each tool provides, and also the pros and cons of using the two.

AWS QuickSight

AWS QuickSight is a cloud-based BI solution (which runs on the Amazon Web Services platform) that you can use to build visuals, perform ad-hoc analysis, generate business insights, and share the results with others. AWS QuickSight connects to a variety of data sources, including AWS data (S3, Athena, Redshift, etc.), third-party data, spreadsheet data, and more. QuickSight processes the data through SPICE, which stands for Super-fast, Parallel, In-memory Calculation Engine. Amazon claims that it is a robust in-memory engine that performs advanced calculations and serves data. If you want to create a dataset in QuickSight, you can either import it into SPICE, or perform a direct query (which queries the data directly instead of importing it into the tool). It is recommended to use SPICE to load the data so that QuickSight can access it quickly and efficiently. Direct query, on the other hand, accesses the data by querying the source directly, but this method is considered inefficient in QuickSight because the data is queried every time a change is made in the analysis.

Architecture

Here is a high-level architecture of QuickSight. In summary, the data source connects to SPICE, which loads and processes the data (cleaning, transformations, etc.). This data is then fed into QuickSight for data visualization.

Key Features

The visual presentation of QuickSight is one of its key selling points. Although the quantity of visual types may be limited, QuickSight focuses on how each visual appeals to the end user. Following are some of the major components/key features in AWS QuickSight:

  • Visuals – These are the components you use to represent your data in the form of visuals. You have bar charts, box plots, combo charts, heat maps, KPIs, line charts, and many more visual types that you can use to create meaningful reports.
  • Insights – As the name suggests, this feature allows you to generate insights with the help of built-in machine learning algorithms. This feature is quite useful because it allows you to interpret your data in a way that might add value to your analysis.
  • Sheets – These are like separate pages that you see in Power BI, where you can keep a group of visuals on a single page. You can have one sheet showing visuals that represent the sales of a company, and another page showing visuals related to inventory analysis.
  • Simplicity – Although this is not a proper ‘feature’, it is certainly important for this BI tool’s appeal to the market. Even people without much technical knowledge can easily explore data and extract valuable insights because of the simplistic and intuitive nature of the tool. Most of the time you are performing simple operations (related to arithmetic, strings, dates, etc.) on the data, changing data types, and dragging and dropping fields onto the visuals.
  • Speed – Speed is a major selling point for QuickSight, due to its SPICE Engine.

Using all of the features provided by QuickSight, users can create meaningful, beautiful, and interactive reports to assist stakeholders in various business areas, such as:

  • Marketing.
  • Finance.
  • Sales.

Pros and Cons

Just like any other tool, QuickSight has its pros and cons. Here are some of the pros of using this tool.

  • Easy to Use – As mentioned earlier, QuickSight is very simple and intuitive to use. The users can configure and start using the tool in no time. It also takes less time to learn the tool, so if this is your first BI tool, working on the data and creating visuals would seem very easy.
  • Everything is on the Cloud – As QuickSight runs on the AWS platform, you don’t really need to set up anything on your system. You just need a working AWS account, a subscription, and a network connection so that you can easily access the tool via the web. Even if you have a low-end system, QuickSight will run flawlessly since everything is hosted on the AWS cloud platform. You can also access the tool via an Android or iOS device, since the integration on mobile devices is also excellent and allows users to view the content in a seamless manner.
  • Quality of the Visuals – QuickSight has some stunning visual types in its collection. Although they are limited in quantity, they can certainly make a huge difference in visual presentation.
  • Pricing – QuickSight’s pricing is pretty optimal for an average user, compared to other platforms. For more information on pricing, please visit the link here.
  • Speed – This is one of the key features of using QuickSight. Because of the SPICE engine, data loading and processing becomes a great experience for all levels of users.

Now that we have highlighted the pros, here are some of the cons of using QuickSight.

  • Limited Visual Types – As mentioned earlier, QuickSight has quality visuals, but they are limited in quantity. So, if you need a visual that is not present in the collection, you might need to look for an alternative within the available visuals set.
  • Simplicity – The ease of use and simplicity was highlighted as an advantage, but it’s also one of its major disadvantages. Now this totally depends upon the use-case. If your reports require simple data connectivity, simple calculations, and visuals that only need fields to be dragged and dropped on to them, then QuickSight is a great choice. But for cases where we have to perform high level transformations, calculations, and complex reporting, this tool is not an optimal one. For complex reporting, there are tools such as Power BI, which we shall talk about in the next section.
  • Still New to the Scene – QuickSight is still pretty new to the BI market. So, this solution has to play catch-up with competitors such as Tableau and Power BI in terms of adding new features that support complex data processing, reporting, and sharing, so that it appeals to the mass market and, in particular, the big corporations.

Now that we have talked briefly about QuickSight, let’s take a look at its competitor: Microsoft Power BI.

Microsoft Power BI

As the name suggests, Power BI is a Business Intelligence software product created by Microsoft. It combines business analytics, data visualizations, and best practices that help an organization make decisions. Power BI is also one of the leading BI solutions in the market, and many have ranked it as the best BI solution out there. Although the ranking is quite subjective, still, it is fair to say that Power BI is considered as a mainstream solution in the BI domain.

Architecture

Let’s talk about the high-level architecture of Power BI. To demonstrate this, here is a diagram showing the various components of Power BI. If you’re already familiar with Power BI, you should notice that I have excluded Power BI Report Server from the diagram. While it is also a part of this ecosystem, the only major difference between Power BI Report Server and Power BI Service is that the former is an on-premises report-sharing platform, whereas the latter is cloud-based. With that said, here is the diagram.

Let’s break down this architecture diagram. Usually, these are the components of a report in Power BI:

  • Data Source – Power BI connects to a variety of data sources and uses the data from them to create reports. It can connect to databases (SQL Server, Redshift, Oracle, etc.), spreadsheets, JSON, XML, SharePoint folders, and many more. If you want to read more about the compatible data sources, click the link here.
  • Power BI Desktop – This is the desktop application that is used by developers to ingest the data, process the data (data transformations and modelling), create visuals and then publish the report to the cloud (Power BI Service), or on-premises server (Power BI Report Server). Power BI Desktop is primarily used for developing the report, so you would find all the options here that could be used to create a report specific to your requirement. This application is only available on Windows, so if you are using MacOS, you might need to install a VM and run the app there.
  • Power BI Service – This is the cloud platform that allows you to share reports with the stakeholders. The developers create reports using Power BI Desktop, and then publish them to Power BI Service so that the end users can generate insights and make data-driven decisions. Power BI Service has various features that allow you to create workspaces, configure dashboards, workspace apps, dataflows, and much more. Also keep in mind that Power BI Service is not for developing BI reports, so if you have any major changes that you need to make to an existing report, they will be done in Power BI Desktop.
  • Browser & Mobile Apps – Once the report has been published to Power BI Service, users can easily view it using their web browser, or a dedicated app available on Android or iOS devices. To see the reports, users need access to their accounts.

Creating and Sharing a Report in Power BI

To create a new visualizations report, you would need to use the Power BI Desktop app, because as we mentioned earlier, this app is primarily used for report development purposes. If you don’t have the app installed, you can easily download it from Microsoft’s website, or you can download it from the Microsoft Store. Personally, I prefer the store option since it allows for automatic updates on the app. Once you open the app, you are welcomed with a beautiful UI of Power BI Desktop, which looks like this.

As you can see, we have a blank canvas at the center where you place all of your visuals. On top, we have various options to connect to data sources, go to the Power Query Editor (which we shall talk about later), add Measures/Calculated Columns, go to the view tab, etc. On the right, you have standard visual types which you can drag onto the canvas to create a visualization. Apart from the standard visuals, you also have the option to download custom visuals from the built-in store. These custom visuals are made by developers from around the world and are either paid or free. On the left, you have three different views: report view, table view, and modeling view. The report view is primarily used for creating visualizations; the table view for looking at the loaded data, adding calculated columns or tables, and changing data types; and the modeling view for creating relations between tables, hiding certain fields/tables, etc.
Now let’s jump to the first key component of creating a report, which is connecting to a data source. In Power BI Desktop, you can connect to a variety of data sources and create report using them. For reference, here is a snapshot of some of the data sources that you can connect to.

As you can see, users can connect to Excel, XML, JSON, SQL Server, Oracle databases, Azure data sources, and much more. You can also search for the data source you are looking for, since this window scrolls down to a lot of options. This goes to show that Power BI Desktop is compatible with the majority of the data sources out there.
Once the data source is connected, you can either start developing the report, or you can transform the data using a built-in tool called Power Query Editor. This tool is built into the Power BI Desktop app and is one of its most important parts, since it allows you to clean and transform the data. Power Query Editor performs all the data processing using a language called M. Here is a brief overview of the UI of Power Query Editor. We won’t be going into the details of the tool, but it offers a lot of features that can be useful for your requirements.

Power Query Editor performs standard transformations, like changing data types, performing arithmetic/string/date operations, joins, group-bys, handling missing values, and much more. You can also apply machine learning and AI techniques to your data to generate various insights. On top of all of this, you can even write Python or R scripts on your data set to handle issues that might not easily be solved with the standard options available in Power Query Editor. All of these options can be used with the help of a few clicks (of course, Python and R require script writing), and Power Query Editor automatically translates those transformations into equivalent M code. It also allows you to edit the M code directly, but that is usually done by more advanced users who are comfortable with the tool.
As you finish the data processing in Power Query Editor, you can load your data back into Power BI Desktop, where you model the data and, ultimately, create the report which you can share with the stakeholders. Once you are inside Power BI Desktop, you can model the data, by which we mean creating relations between tables using common fields, and creating Measures (functions that return scalar values), Calculated Columns, and Calculated Tables using a language called DAX (Data Analysis Expressions). DAX is used to perform various calculations within the report and can become quite complex based on the requirement you’re trying to fulfill. Unlike the M language we talked about earlier, DAX requires a lot of time and patience to become good at; it is considered one of the hardest languages to master, since it provides a lot of functionality and, based on the scenario, you are learning something new every time. But you should not worry too much about ‘mastering’ DAX, because you naturally become good at it over time and can navigate through problems relatively easily (Googling problems helps quite a lot, though).
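To give a flavor of what a DAX measure looks like, here is a tiny sketch (the Sales table and its column names are hypothetical, invented for illustration):

```
Total Sales = SUMX ( Sales, Sales[Quantity] * Sales[UnitPrice] )
```

SUMX iterates over the Sales table row by row, multiplies quantity by unit price, and sums the results; the measure is then re-evaluated automatically under whatever filters each report visual applies.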
After modeling the data, you can start creating visuals inside the report canvas. You can use features such as bookmarks, tooltips, drill-down, drill-through in your report to make the experience more interactive. Just like QuickSight, you can also add multiple pages/sheets where you can group together a bunch of visuals that represent a certain analysis. You can format the visuals based on specification, use built-in machine learning and AI techniques, as well as create visuals with the help of R and Python. Once the report is created and ready to be shared, you can publish it to Power BI Service where you can collaborate with your colleagues (such as Quality Assurance Engineers, other developers) to finalize the report, and share with the actual consumers – the stakeholders.

To summarize the experience of Power BI: it is definitely a bit complex compared to AWS QuickSight and takes some time to get used to. Overall, it is a great solution if you want to build detailed reports.

Pros and Cons

Now that we have highlighted some key features of Power BI, let’s talk about the advantages and disadvantages of using the software. Let’s first look at some of the pros.

  • Availability / Affordability – Power BI in general is quite affordable to use. The desktop app is free for everyone, so if you want to learn about the tool and work on projects, you can easily do so without paying anything. However, if you want to use Power BI Service and all its features, you would need to purchase a pro license at minimum, starting at $13.70. For more details on pricing, please visit the mentioned link.
  • Abundant Data Sources – Power BI connects to a variety of data sources, and has great integration with Microsoft’s proprietary services such as Excel, SQL Server. If you ever come across a data source that you’ve not heard of, and want to see if it is compatible with Power BI, there is a good chance it will be available in the list.
  • Monthly Updates – The good thing about Power BI is that it is updated every single month. The developers over at Microsoft are constantly adding or improving features all the time, so over time Power BI has become a refined product.
  • Mainstream BI Tool – Since Power BI is one of the leading BI solutions in the market, the support in the community is quite impressive. If you ever run into a problem, there is a good chance others have come across it too, and you can easily google the problem to find a solution. On top of this, there are tons of resources online from which you can learn about the tool: you can watch videos on YouTube, read articles, and study courses on various websites. Personally, I use SQLBI and DAX Guide quite a lot, and also YouTube channels such as Curbal, to learn more about the features within Power BI. This tool has also become a major requirement in the market if you want to become a Data Analyst / BI Developer. So, if you know how to work with the tool, it will definitely help you stand out in the interview process.
  • Custom Visuals – One key advantage of using Power BI Desktop is its ability to use Custom visuals. If your requirement cannot be fulfilled with the default visuals, you can always visit the store and search for the visual type you’re looking for. Although I would mention here that it takes time to search for a particular visual, it’s still safe to say that the tool has a variety of visuals to choose from.

Here are some of the cons of using Power BI. Of course, there are minute details, but these are the major issues I can think of:

  • Takes Time to Learn – As Power BI comes with a lot of features, it definitely takes some time to become good at it. You are constantly learning new things as you encounter various situations. Power BI is not just a simple drag-and-drop type of BI tool; it comes with a complete suite of products and features. So, learning most of the things comes with experience and effort.
  • Performance – One thing I have noticed over time with Power BI is the performance. With big data, your reports can become quite slow. In order to reduce the issue, you have to optimize the reports by applying various techniques. Applying the optimization techniques requires knowledge and experience, so if you’re new to the BI domain, optimization can become a major hurdle and you can end up with a report that the end-user can’t even see.

Power BI vs QuickSight: The Comparison

Now that we have highlighted the key features, as well as the pros and cons of the two BI solutions, we are ready to make a brief comparison between them. To keep things simple, this table summarizes the major differences covered in this article:

Aspect            AWS QuickSight                     Microsoft Power BI
Hosting           Fully cloud-based (AWS)            Desktop app (Windows) plus cloud Service
Ease of use       Very simple and intuitive          Steeper learning curve
Visuals           High quality, limited quantity     Standard set plus downloadable custom visuals
Data processing   SPICE in-memory engine             Power Query Editor (M) and DAX
Best suited for   Simple, fast reporting             Complex transformations and reporting
Maturity          Newer to the market                Mainstream, updated monthly

Summary

The two BI solutions that we have discussed here have their unique features and target markets. Both have their ups and downs, and if I had to pick a tool, I would definitely choose Power BI because of the reasons I have highlighted in the article. Power BI definitely fits the needs of the majority of users, but you can always use QuickSight if your reporting is simple and you do not require all the features that Power BI provides. We do, however, hope that QuickSight makes up for lost time and catches up to its competitors by adding new features consistently over time, so that it challenges the top contenders and takes a fair share of the market.

Using Typescript with React Native


Author: Shaban Qamar

We all love JavaScript, as it is the common language for building React Native apps. But some of us also love types. Luckily, options exist to add stronger types to JavaScript. Our favorite is TypeScript, but React Native supports Flow out of the box. Today, we’re going to look at how to use TypeScript in React Native apps.

Commands which are used

JavaScript:
To create a react-native app with JavaScript we use this command:
npx react-native init

TypeScript:
To create a react-native app with TypeScript we use this command:
npx react-native init --template react-native-template-typescript
However, there are some limitations to Babel’s TypeScript support.

Prerequisites

Since you might be developing on one of several different platforms, targeting several different types of devices, basic setup can be involved. You should first ensure that you can run a plain React Native app without TypeScript. When you’ve managed to deploy to a device or emulator, you’ll be ready to start a TypeScript React Native app.
You will also need Node.js, npm, and Yarn.

Initializing

Once you’ve created the basic React Native project, you’ll be ready to start adding TypeScript. Let’s go ahead and do that.

Adding TypeScript

The next step is to add TypeScript to your project. The following commands will:

  • add TypeScript to your project
  • add React Native TypeScript Transformer to your project
  • initialize an empty TypeScript config file, which we’ll configure next
  • add an empty React Native TypeScript Transformer config file, which we’ll configure next
  • add typings for React and React Native
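A sketch of those commands, following the conventional react-native-typescript-transformer setup (the package names are real; pin versions as your project requires):

```shell
yarn add --dev typescript
yarn add --dev react-native-typescript-transformer
yarn tsc --init --pretty --jsx react
touch rn-cli.config.js
yarn add --dev @types/react @types/react-native
```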

The tsconfig.json file contains all the settings for the TypeScript compiler. The defaults created by the command above are mostly fine, but open the file and uncomment the following line:
{
  /* Search the config file for the following line and uncomment it. */
  // "allowSyntheticDefaultImports": true,  /* Allow default imports from modules with no default export. This does not affect code emit, just typechecking. */
}

The rn-cli.config.js contains the settings for the React Native TypeScript Transformer. Open it and add the following:
module.exports = {
  getTransformModulePath() {
    return require.resolve('react-native-typescript-transformer');
  },
  getSourceExts() {
    return ['ts', 'tsx'];
  },
};

Rename the generated App.js and __tests__/App.js files to App.tsx. index.js needs to keep the .js extension. All new files should use the .tsx extension (or .ts if the file doesn’t contain any JSX).

If you tried to run the app now, you’d get an error like object prototype may only be an object or null. This is caused by a failure to import the default export from React as well as a named export on the same line. Open App.tsx and modify the import at the top of the file, replacing

import React, { Component } from 'react';

with

import React from 'react';
import { Component } from 'react';

Adding TypeScript Testing Infrastructure

React Native ships with Jest, so for testing a React Native app with TypeScript, we’ll want to add ts-jest to our devDependencies.
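That install is a one-liner (sketch):

```shell
yarn add --dev ts-jest
```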
Then, we’ll open up our package.json and replace the jest field with the following:
{
  "jest": {
    "preset": "react-native",
    "moduleFileExtensions": [
      "ts",
      "tsx",
      "js"
    ],
    "transform": {
      "^.+\\.(js)$": "<rootDir>/node_modules/babel-jest",
      "\\.(ts|tsx)$": "<rootDir>/node_modules/ts-jest/preprocessor.js"
    },
    "testRegex": "(/__tests__/.*|\\.(test|spec))\\.(ts|tsx|js)$",
    "testPathIgnorePatterns": [
      "\\.snap$",
      "<rootDir>/node_modules/"
    ],
    "cacheDirectory": ".jest/cache"
  }
}
This will configure Jest to run .ts and .tsx files with ts-jest.

Installing Dependency Type Declarations

To get the best experience in TypeScript, we want the type-checker to understand the shape and API of our dependencies. Some libraries will publish their packages with .d.ts files (type declaration/type definition files), which can describe the shape of the underlying JavaScript. For other libraries, we’ll need to explicitly install the appropriate package in the @types/ npm scope.

For example, here we’ll need types for Jest, React, React Native, and React Test Renderer.

yarn add --dev @types/jest @types/react @types/react-native @types/react-test-renderer

We saved these declaration file packages to our dev dependencies because this is a React Native app that only uses these dependencies during development and not during runtime. If we were publishing a library to NPM, we might have to add some of these type dependencies as regular dependencies.

Ignoring More Files

For your source control, you’ll want to start ignoring the .jest folder. If you’re using git, we can just add entries to our .gitignore file.

# Jest
#
.jest/

As a checkpoint, consider committing your files into version control.
git init
git add .gitignore # important to do this first, to ignore our files
git add .
git commit -am “Initial commit.”

After completing all the steps above, you are good to go: you can create screens and components just like you do in JavaScript. But remember, you are no longer working in JavaScript; it’s TypeScript, so you have to work according to the environment you have just set up.
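As a small illustration of what that environment buys you, here is a sketch of typed props for a hypothetical component (the GreetingProps name and fields are invented; the plain function stands in for a component’s render logic so the sketch stays self-contained):

```typescript
// Hypothetical typed props, as you might declare for a screen or component.
type GreetingProps = {
  name: string;       // required
  excited?: boolean;  // optional
};

// Plain function standing in for a component's render logic.
function greeting({ name, excited }: GreetingProps): string {
  return `Hello, ${name}${excited ? "!" : "."}`;
}

// TypeScript now rejects greeting({}) or a misspelled prop at compile time.
console.log(greeting({ name: "World", excited: true })); // prints "Hello, World!"
```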
To run the project just type the following command:

For Android:

npx react-native run-android

For iOS:

npx react-native run-ios

GitHub Branching Strategy


Author: Muhammad Raza Saeed

There is already a lot of contention and debate around using the Git Flow vs the GitHub Flow branching model, since there are trade-offs to using either. This is a concise summary.

  • For teams who must make formal releases on a longer time scale (a few weeks to a few months between releases) and be able to perform hotfixes, maintenance branches and other things that emerge from shipping so infrequently, git-flow makes sense.
  • It is advisable to choose something simpler, like GitHub Flow, for organizations who have established a culture of shipping, push to production frequently (if not daily), and are constantly testing and deploying.

However, comparing Git Flow and GitHub Flow is not the goal of this report. Its purpose is to promote the most straightforward branching model that will work for all potential project teams. The “Branching Strategy” and the GitHub “Workflows” across projects need to be standardized immediately, utilizing the “Perspective” method. Using the Git Flow model is recommended.

Key Benefits of Git-Flow Branching Model:

  • Parallel Development:
    GitFlow is useful because it isolates new development from completed work, which makes parallel development simple. Feature branches are used to work on new features and non-emergency bug fixes, and they are merged back into the main body of code only when the developer is satisfied that the work is ready for release.
    If you are interrupted, all you have to do to switch from one task to another is commit your changes and create a new feature branch for the other task. Once that task is complete, check out your original feature branch to pick up where you left off.
  • Collaboration:
    Feature branches also make it simpler for two or more developers to collaborate on the same feature, because each feature branch contains only the changes required to make that feature work. This makes it very simple to view and understand what each collaborator is doing.
  • Release Staging Area:
    As the new development is completed, it gets merged back into the develop branch, which is the staging area for all completed features that haven’t yet been released. So when the next release is branched off to develop, it will automatically contain all of the new stuff that has been finished.
  • Support For Emergency Fixes:
    GitFlow supports hotfix branches: branches made from a tagged release (or the master branch). You can use these to make an emergency change, safe in the knowledge that the hotfix will contain only your emergency fix. There’s no risk of accidentally merging new development at the same time.
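As a concrete sketch of that emergency-fix workflow, run in a throwaway repository (the tag, ticket number, and commit messages below are hypothetical examples):

```shell
# Hypothetical hotfix flow in a disposable repository.
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "Release 1.0.0"
git tag v1.0.0                        # production release tag on master

# Branch the hotfix from the production tag, not from develop,
# so it contains only the emergency fix.
git checkout -q -b hotfix/1234-login-crash v1.0.0
git commit -q --allow-empty -m "fix: guard against null session"

# Merge back into master (and, not shown here, into develop).
git checkout -q master
git merge -q --no-ff -m "Merge hotfix/1234-login-crash" hotfix/1234-login-crash
```

Because the hotfix branch starts at the tagged production commit, nothing that has landed on develop since the release can leak into the fix.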

Branches Explained:

Main Branches:

  • Master – this branch contains the stable code running in production. Projects should consider origin/master to be the main branch, where the source code of HEAD always reflects a production-ready state.
  • Develop – this is often referred to as the “integration branch”. It is also the starting point for feature branches. When the source code in the origin/develop branch reaches a stable point and is ready to be released, projects should create a release branch.

Supporting Branches:

  • Feature – every time there’s a new feature to be implemented, a new branch needs to be created following the pattern feature/<Jira_storyID>-<summary>. Feature branches must be merged back into the develop branch.
  • Release – ideally this branch should be used for UAT releases. The key moment to branch off a new release branch from develop is when develop reflects the desired state of the new release. The release branch should be merged into the master branch once a release (or UAT) is complete, and all UAT fixes should be merged back into develop.
  • Bugfix – a bugfix branch should be used for fixing UAT bugs. Bugfix branches are branched from release branches, and once the UAT bug is fixed, the change is merged back into the release branch.
  • Hotfix – a hotfix branch is a lot like release and feature branches, except that it is branched from master instead of develop. When a critical bug in a production version must be resolved immediately, a hotfix branch is branched off from the tag on the master branch that marks the current production version.
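Where each supporting branch is cut from can be sketched with plain git commands in a throwaway repository (the Jira IDs and version numbers are hypothetical):

```shell
# Hypothetical examples of the base for each supporting branch type.
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial"
git tag v1.3.0                # pretend this is the current production release
git branch develop

git checkout -q -b feature/PROJ-42-user-login develop     # feature: from develop
git checkout -q -b release/1.4.0 develop                  # release: from develop
git checkout -q -b bugfix/PROJ-77-uat-typo release/1.4.0  # bugfix: from the release branch
git checkout -q -b hotfix/PROJ-99-crash v1.3.0            # hotfix: from the production tag
```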

Supporting Branches – Prefix Conventions:

  • Feature -> feature/**
  • Release -> release/**
  • Hotfix -> hotfix/**
  • Bugfix -> bugfix/**
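These prefix conventions can also be enforced mechanically, for example in a CI step or a pre-push hook. A minimal sketch, assuming a POSIX shell (the function and branch names are hypothetical):

```shell
# Sketch: validate a branch name against the git-flow prefix conventions.
check_branch_name() {
  case "$1" in
    feature/*|release/*|hotfix/*|bugfix/*|master|develop) return 0 ;;
    *) echo "invalid branch name: $1" >&2; return 1 ;;
  esac
}

check_branch_name "feature/PROJ-42-user-login"   # accepted
check_branch_name "my-random-branch" || true     # rejected, prints a warning
```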

Conventions of Git-Flow Approach:

  • Using short lived branches.
  • When a feature is completed a Pull Request is created to merge with develop branch. This allows code review and integration tests to be verified before merging.
  • The new feature to be developed needs to follow a syntax like feature/<Jira-storyID>-<summary>
  • The hotfix to be developed needs to follow the syntax like hotfix/<IssueID>-<summary>
  • The release to be created needs to follow the syntax like release/<version>
  • The develop branch is the main developer’s integration branch.
  • The master branch always reflects the current code from production.

SDLC Overview:

Development Phase:
New development (new feature, sprint bugs) is built into feature branches. Feature branches are branched off from develop branch, and finished features and fixes are merged back into the develop branch once ready.
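The development phase can be sketched in a throwaway repository (the branch name and commit messages are hypothetical; in practice the final merge happens via a Pull Request):

```shell
# Hypothetical feature-branch cycle in a disposable repository.
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial"
git branch develop

# Cut the feature branch from develop and do the work there.
git checkout -q -b feature/PROJ-42-user-login develop
git commit -q --allow-empty -m "feat: add login screen"

# When the feature is ready, merge it back into develop.
git checkout -q develop
git merge -q --no-ff -m "Merge feature/PROJ-42-user-login" feature/PROJ-42-user-login
```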

UAT and GO Live Phase:
When it is time to make a release, a release branch is created from develop. The code in the release branch is deployed onto UAT. UAT bugs are reproduced and fixed in bugfix branches. This deploy -> test -> fix -> redeploy -> retest cycle continues until the customer is happy that the release is good enough to ship into production for end users.
When the release is finished, the release branch is merged into master. And backward merged into develop to make sure that any changes made in the release branch aren’t accidentally lost by new development.
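A minimal sketch of that release cycle in a throwaway repository (the version number and commit messages are hypothetical; the UAT fix would normally arrive via a bugfix branch):

```shell
# Hypothetical release cycle: cut from develop, fix, merge forward and back.
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial"
git branch develop

# Cut the release branch from develop for UAT.
git checkout -q -b release/1.4.0 develop
git commit -q --allow-empty -m "fix: UAT bug PROJ-77"

# Release signed off: merge into master, then back-merge into develop
# so the UAT fixes are not lost to new development.
git checkout -q master
git merge -q --no-ff -m "Merge release/1.4.0" release/1.4.0
git checkout -q develop
git merge -q --no-ff -m "Back-merge release/1.4.0" release/1.4.0
```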

Post GO Live Phase:
The master branch holds the production code. Therefore, it is important to tag the master branch with the version of the production release. The only commits to master are merges from release branches and hotfix branches.
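Tagging the production release on master might look like this, in a throwaway repository (the version number is hypothetical):

```shell
# Hypothetical production tag after a release branch has been merged.
set -e
cd "$(mktemp -d)"
git init -q -b master
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "Merge release/1.4.0"

# Annotated tags record the tagger and a message, which suits release tags.
git tag -a v1.4.0 -m "Production release 1.4.0"
```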

KMM as a New Approach to Cross-platform App Development


What is KMM?

KMM stands for Kotlin Multiplatform Mobile, and it’s a new way to develop mobile apps. It’s a combination of the native and cross-platform approaches. With this method, we can write the common “server logic” once for several platforms. By server logic we mean not backend development but the layer of the application that exchanges data between the app on the phone and the server.

At the same time, the UI part will be separate for each platform. For example, there will be one code for iOS and a different one for Android. For more detailed information, you can check the official website.

Disadvantages

  • In KMM, server logic is written in Kotlin. Kotlin is originally an Android app development language, and not many iOS developers know it well enough to use it in app development. So it may not be easy to find a team that can handle this approach.
  • The novelty of the framework. KMM is still in the alpha stage, which means that no one can guarantee its stability. So if you decide to create your mobile apps using KMM, you will probably need a maintenance team in case any errors appear. The good news is that the Kotlin team has promised to release the beta this year.


Examples

Despite the fact that KMM appeared quite recently, it already has many fans among well-known brands. For instance, it helps Netflix optimize speed and product reliability. Leroy Merlin uses KMM in their mobile app. Among KMM users you can also find such giants as Philips, Cash App, VMware, Quizlet, Autodesk, and many others.

If you are thinking about cross-platform app development services for your idea, you really should consider KMM.