Mastering the Art of Avoiding ANRs in Android Applications

150 150 DevGate

Author: Muhammad Raza Saeed,

Android applications have revolutionized the way we interact with mobile devices. However, a common problem faced by Android developers is the dreaded Application Not Responding (ANR) error, which can lead to a frustrating user experience and potentially negative reviews. ANRs occur when the main thread of an application is blocked for too long, resulting in unresponsiveness. In this article, we will explore effective strategies, examples, and best practices to avoid ANRs and ensure smooth and uninterrupted user experiences in Android applications.

Understand the Android Application Lifecycle: To avoid ANRs, it is essential to have a clear understanding of the Android application lifecycle. Familiarize yourself with key components such as activities, services, and broadcast receivers. Ensure that time-consuming tasks are offloaded from the main thread to background threads or services, leaving the main thread available to handle user interactions promptly.

Best Practice:

Use IntentService (now deprecated in favor of WorkManager on modern Android) for long-running operations that don’t require user interaction, such as uploading files to a server. IntentService automatically handles worker threads and stops itself when the work is complete.


Optimize UI Rendering:

UI rendering plays a vital role in maintaining a responsive application. Here are some optimization techniques to consider:

Best Practices:

a. Use the Layout Inspector tool provided by Android Studio (which replaced the older Hierarchy Viewer) to identify any rendering bottlenecks or excessive view hierarchies that might hinder performance.

b. Employ lightweight UI components and avoid nested layouts whenever possible.

c. Optimize resource usage, such as using appropriate image sizes, minimizing overdraw, and reducing the complexity of vector graphics.

d. Implement view recycling and lazy loading techniques for lists and grids to avoid rendering excessive UI elements at once.


Asynchronous Task Execution:

To prevent long-running operations from blocking the main thread, use asynchronous task execution mechanisms such as:

Best Practices:

a. AsyncTask: Deprecated as of Android 11 (API level 30); existing code still runs, but prefer the alternatives below for new development.

b. Handlers and Looper: Employ Handler and Looper classes to execute tasks on dedicated worker threads.

c. Executors: Utilize thread pools with Executors to manage background tasks effectively.

d. Kotlin Coroutines and RxJava: Leverage these libraries to simplify asynchronous programming and handle background operations efficiently.
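As an illustration of the Executors approach above, here is a plain-Java sketch (the class and method names are my own, not from any Android API; on a real device you would post the result back to the main thread with a Handler rather than calling Future.get()):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: offload slow work to a background thread pool so the calling
// (UI) thread stays responsive.
public class BackgroundWork {

    private static final ExecutorService BACKGROUND = Executors.newFixedThreadPool(2);

    // Simulates a long-running operation (e.g. a network call or file upload).
    static String slowOperation() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "done";
    }

    // Submits the slow work without blocking the calling thread.
    public static Future<String> runInBackground() {
        return BACKGROUND.submit(BackgroundWork::slowOperation);
    }

    public static void main(String[] args) throws Exception {
        Future<String> result = runInBackground();     // returns immediately
        System.out.println("result: " + result.get()); // get() blocks; demo only
        BACKGROUND.shutdown();
    }
}
```

The key point is that the submitting thread is free the instant `submit` returns; only the demo's `get()` call waits for the result.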


Proper Network and Database Operations:

Network and database operations are often time-consuming and can lead to ANRs if not handled correctly. Follow these best practices:

Best Practices:

a. Perform network operations on a separate thread using libraries like Volley, OkHttp, or Retrofit.

b. Utilize Content Providers or Room Persistence Library for database operations, as they handle threading and asynchronous tasks efficiently.

c. Implement proper caching mechanisms to minimize unnecessary network or database queries.

d. Use background services for long-running tasks such as file downloads and uploads.

How to Publish React Native App to Google Play Store


Author: Haider Ali,

React Native is a popular framework for creating cross-platform mobile applications. After developing your React Native app, the next step is to publish it on app stores like the Google Play Store. Registering as a Google Play developer, getting the app ready for release, making a release build, and submitting the app for review are all necessary before an app can be published on the Google Play Store.

The processes necessary to publish your React Native app on the Google Play Store are outlined in this guide. Regardless of whether you are an experienced developer or new to the field of mobile app development, this article will give you a clear and comprehensive overview of the steps necessary to have your app accepted by the market.

Your Android application must be signed with a release key before you can distribute it through the Google Play Store. You should keep this key safe, because the same key must also be used for all upcoming updates. Since 2017, Google Play has been able to manage release signing automatically through its App Signing by Google Play feature, but your application binary still needs to be signed with an upload key before it is submitted to Google Play. The topic is covered thoroughly on the Signing Your Apps page of the Android developer documentation. This guide gives a quick overview of the procedure along with the steps necessary to package the JavaScript bundle.

Generating an upload key

With keytool, you can create a private signing key.

For Windows:

For Windows, keytool must be executed as administrator from the JDK’s bin directory.
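The original post omits the command itself; following the React Native documentation, a typical invocation looks like this (the keystore name and alias match the rest of this guide):

```shell
keytool -genkeypair -v -storetype PKCS12 -keystore my-upload-key.keystore -alias my-upload-key -keyalg RSA -keysize 2048 -validity 10000
```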

After running this command, you will be prompted for a keystore and key password and for the distinguished-name fields for your key. The keystore is then created as a file named “my-upload-key.keystore”.

There is just one key in the keystore, and it is good for 10,000 days. Remember to write down the alias because you will need it later when signing your app.

For macOS:

If you’re not sure where your JDK bin folder is on macOS, run the following command:
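The command itself is missing from the original post; the standard way to locate the JDK on macOS is:

```shell
/usr/libexec/java_home
```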

The JDK directory that is produced by this command will look something like this:
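On a typical macOS installation, the output is a path along these lines (the exact version segment will differ):

```
/Library/Java/JavaVirtualMachines/jdkX.X.X.jdk/Contents/Home
```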

Use the cd command to go to that directory, then run the keytool command with sudo access as follows:
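The command was not preserved in this post; per the React Native documentation it is the same keytool invocation as on Windows, prefixed with sudo:

```shell
sudo keytool -genkeypair -v -storetype PKCS12 -keystore my-upload-key.keystore -alias my-upload-key -keyalg RSA -keysize 2048 -validity 10000
```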

Setting up Gradle File

  • Put the my-upload-key.keystore file in your project folder’s android/app directory.
  • Add the following (replacing ***** with the right keystore password, alias, and key password) to the file ~/.gradle/gradle.properties or android/gradle.properties
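The snippet itself is missing from the original post; the React Native docs use variable names along these lines (any consistent names work, as long as your Gradle signing config references the same ones):

```
MYAPP_UPLOAD_STORE_FILE=my-upload-key.keystore
MYAPP_UPLOAD_KEY_ALIAS=my-upload-key
MYAPP_UPLOAD_STORE_PASSWORD=*****
MYAPP_UPLOAD_KEY_PASSWORD=*****
```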

These are going to be global Gradle variables, which we can later use in our Gradle config to sign our app.

Adding signing config to your app's Gradle config

The final configuration step is to set up release builds to be uploaded key-signed. In your project folder, edit the file android/app/build.gradle and add the following signing config.
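The signing config was not preserved in this post; here is a sketch following the React Native documentation (it assumes you named your Gradle variables MYAPP_UPLOAD_*; the `...` marks existing config that stays as-is):

```groovy
android {
    ...
    signingConfigs {
        release {
            if (project.hasProperty('MYAPP_UPLOAD_STORE_FILE')) {
                storeFile file(MYAPP_UPLOAD_STORE_FILE)
                storePassword MYAPP_UPLOAD_STORE_PASSWORD
                keyAlias MYAPP_UPLOAD_KEY_ALIAS
                keyPassword MYAPP_UPLOAD_KEY_PASSWORD
            }
        }
    }
    buildTypes {
        release {
            ...
            signingConfig signingConfigs.release
        }
    }
}
```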

Generating the release apk

After the configuration, if you want to make a release APK, run the following command:
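The command is missing from the original post; per the React Native docs it is:

```shell
cd android
./gradlew assembleRelease
```

The generated APK can then be found under android/app/build/outputs/apk/release/.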

And if you encounter any error then try this command:
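The original post does not preserve the exact fallback command, but a common first remedy is a clean build:

```shell
cd android
./gradlew clean
./gradlew assembleRelease
```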

Generating the release AAB

After the configuration, if you want to make a release AAB, run the following command:
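The command is missing from the original post; per the React Native docs it is:

```shell
cd android
./gradlew bundleRelease
```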

The created AAB is ready to be uploaded to Google Play and can be found under android/app/build/outputs/bundle/release/app-release.aab.

Enabling Proguard to reduce the size of the APK (optional)

Proguard is a tool that can slightly reduce the size of the APK.

To enable Proguard, edit android/app/build.gradle and turn on the flag that runs Proguard to shrink the Java bytecode in release builds:
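The snippet is missing from the original post; per the React Native docs it is a one-line flag near the top of android/app/build.gradle:

```groovy
/**
 * Run Proguard to shrink the Java bytecode in release builds.
 */
def enableProguardInReleaseBuilds = true
```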

RESTful API Design: Best Practices and Tips


Author: Muhammad Ehtasham,

As more and more applications move to the cloud and the need for web services continues to grow, designing a RESTful API has become an essential skill for engineers. A well-designed API makes it easier for clients to interact with your application and can improve the overall performance and security of your system. In this article, we’ll explore some of the best practices and tips for designing a RESTful API.

What is a RESTful API?

Representational State Transfer, or REST, is an architectural approach to software development that outlines a number of constraints for developing web services. A RESTful API is one that complies with these constraints and is designed to be straightforward and user-friendly. It interacts with resources using HTTP methods including GET, POST, PUT, and DELETE, and it employs HTTP status codes to communicate the outcome of each request.

Best Practices for RESTful API Design

Resource Names Should Be Nouns: Resources are the foundation of a RESTful API. Instead of using verbs, name your resources with nouns. For example, use /users instead of /getUsers.

  • Use HTTP Methods Properly: Use the correct HTTP method for the kind of action you want to carry out on a resource. Use GET to retrieve resources, POST to create new resources, PUT to update existing resources, and DELETE to remove resources.
  • Utilize HTTP Status Codes: Use HTTP status codes to describe the result of a request. For instance, use 200 for a successful response, 404 for a resource that cannot be found, and 500 for a server error.
  • Consistent Resource Naming: Keep your API’s resource naming consistent. For instance, if /users returns a list of users, use /users/{id} to get a specific user.
  • Use Pagination to Limit Huge Data Sets: Use pagination to control how much data is returned in response to a single request. Large data sets become easier to manage, which can improve the performance of your API.
  • Use Query Parameters for Filtering: Use query parameters to filter the information your API delivers. Clients can more easily access exactly the data they need, and the amount of data returned per request may decrease as a result.
  • Use HATEOAS: Provide links to related resources using HATEOAS (Hypermedia as the Engine of Application State). This helps clients discover new resources and makes your API easier to use.
  • Use Versioning: Manage API changes using versioning. This helps you avoid breaking changes and makes maintaining backward compatibility simpler.
  • Use Authentication and Authorization: Employ authentication and authorization to control access to your API. This can enhance your system’s security and prevent unauthorized access.
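To make several of these practices concrete, here is a framework-free sketch of a noun-based /users resource with method-driven behavior and status codes (the in-memory store and the handle() helper are purely illustrative, not part of any real library):

```javascript
// A framework-free sketch of noun-based resources driven by HTTP methods.
const users = new Map([[1, { id: 1, name: "Ada" }]]);

function handle(method, path, body) {
  const userMatch = path.match(/^\/users\/(\d+)$/);

  if (method === "GET" && path === "/users") {
    return { status: 200, body: [...users.values()] };           // list resources
  }
  if (method === "POST" && path === "/users") {
    const id = users.size + 1;
    users.set(id, { id, ...body });
    return { status: 201, body: users.get(id) };                 // created
  }
  if (method === "GET" && userMatch) {
    const user = users.get(Number(userMatch[1]));
    return user ? { status: 200, body: user } : { status: 404 }; // found or not
  }
  if (method === "DELETE" && userMatch) {
    return users.delete(Number(userMatch[1]))
      ? { status: 204 }                                          // deleted, no content
      : { status: 404 };
  }
  return { status: 405 };                                        // method not allowed
}
```

Note that the routes are nouns (/users, /users/{id}); the HTTP method alone decides whether we list, create, fetch, or delete.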

Tips for RESTful API Design

  • Make It Simple: Keep your API straightforward and user-friendly, and make sure your documentation is clear and understandable.
  • Design for Scale: Consider scale while developing your API, and think about how it will handle heavy traffic and large volumes of data.
  • Design for Performance: Make your API respond quickly, and think about how it will handle queries as you optimize it.
  • Implement Error Handling: Provide error handling that delivers clear and informative error messages, so clients can understand and fix problems with their requests.
  • Utilize Caching: Employ caching to enhance your API’s performance. This lowers the volume of queries hitting your system and speeds up response times.
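As a sketch of the pagination and query-parameter filtering mentioned above (the function names here are illustrative, not from any framework):

```javascript
// Return one page of results plus enough metadata for the client to
// request the next page.
function paginate(items, { page = 1, perPage = 10 } = {}) {
  const start = (page - 1) * perPage;
  return {
    page,
    perPage,
    total: items.length,
    data: items.slice(start, start + perPage), // only one page of results
  };
}

// Keep items whose properties match every given query parameter,
// e.g. filterBy(users, { active: true }).
function filterBy(items, fields) {
  return items.filter(item =>
    Object.entries(fields).every(([key, value]) => item[key] === value)
  );
}
```

A server would typically feed query parameters such as ?page=2&perPage=10&active=true straight into these helpers.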


Although creating a scalable, secure, and user-friendly RESTful API can be challenging, it is possible if you follow best practices and recommendations. By using the appropriate HTTP methods, status codes, and resource names, you can create an API that is understandable and straightforward to use. By using HATEOAS, query parameters, and pagination, you can make it easier for users to access your API. Versioning, authentication, and authorization can improve your system’s security and maintainability.
When designing a RESTful API, it’s critical to keep the ideas of performance, scalability, and simplicity in mind. By building with these concepts in mind, you may create an API that fits the needs of both your system and your clients.
In addition to these recommendations and best practices, there are additional considerations to be made while developing a RESTful API, such as API documentation, testing, and monitoring. You can be sure that your API is working correctly and meeting client requests while also guaranteeing that developers can use it efficiently when it is tested and monitored.
To sum up, creating a RESTful API is a crucial skill for developers building web services. You can design an API that is straightforward, user-friendly, scalable, and secure by adhering to best practices and recommendations for resource names, HTTP methods, HTTP status codes, pagination, filtering, HATEOAS, and versioning.

How to build a new ReactJS app and Why ReactJS is a popular choice for web development?


Author: M. Ahmed Fraz,

ReactJS is a JavaScript library used to build user interfaces. It is well-known for its speed, scalability, and simplicity, making it an ideal choice for developing large-scale web applications, and it is one of the most popular front-end frameworks.

Why Use ReactJS?

ReactJS has several benefits that make it a popular choice for web development nowadays. Following are a few of the key advantages and features of using ReactJS:

Reusable Components:

Reusable components are an important element of ReactJS. Components are generally small, self-contained pieces of code that can be used anywhere across the entire application, making development faster and more efficient.

This means that each component should focus on a specific aspect of the UI and should be responsible for its own rendering and managing its state separately.

Developers can utilize ReactJS to build reusable, scalable components that can be shared across numerous pages or applications. This approach saves time and cuts down on the number of lines of code that need to be written. Reusable components are a fundamental concept in ReactJS, and they are essential for building scalable, maintainable, and efficient applications.
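As a sketch (the component name and props here are hypothetical, and a build setup with JSX support is assumed), a small reusable component might look like this:

```jsx
// Button.js - a small, self-contained, reusable component.
// It renders based only on the props it receives.
function Button({ label, onClick }) {
  return (
    <button className="app-button" onClick={onClick}>
      {label}
    </button>
  );
}

export default Button;

// Reused anywhere in the app:
//   <Button label="Save" onClick={handleSave} />
//   <Button label="Cancel" onClick={handleCancel} />
```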

Virtual DOM:

ReactJS allows developers to make use of a virtual DOM, which is a lightweight representation of the actual DOM. This process speeds up the functionalities to update the user interface because just the elements of the DOM that have changed need to be updated. The Virtual DOM provides advantages such as faster rendering times, enhanced efficiency, and a better user experience.

The Virtual DOM reduces the amount of work that the browser has to do by minimizing the number of updates to the real DOM, which can improve the application’s overall speed and responsiveness. Besides that, the Virtual DOM can help prevent the occurrence of common issues such as layout thrashing, in which multiple changes to the real DOM cause unnecessary reflows and repaints.

One-way Data Binding:

ReactJS uses a one-way data binding approach, which makes it easier and more effective to manage the application state. One-way data binding means that data flows in a unidirectional model, from the parent component down to its child components, and changes in a child component’s state are communicated back to the parent component via callbacks.
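A small sketch of this flow (component names are hypothetical; a JSX build setup is assumed):

```jsx
import React, { useState } from "react";

// Data flows DOWN via props; changes flow UP via a callback.
function Parent() {
  const [query, setQuery] = useState("");
  return <SearchBox value={query} onChange={setQuery} />;
}

function SearchBox({ value, onChange }) {
  // The child never mutates `value` directly; it reports changes upward.
  return <input value={value} onChange={e => onChange(e.target.value)} />;
}
```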

Declarative Programming:

ReactJS utilizes declarative programming, which means that developers simply declare what they want the application to do, and ReactJS handles the details. This approach makes it easier to write clean, maintainable, and reusable code. ReactJS allows developers to describe how the user interface should look based on the current application state.

Declarative programming is more accessible and user-friendly than the typical imperative approach, which requires developers to specify step by step how the front end should update in response to changes in the application state.

Large Community:

ReactJS has a strong and active developer community that contributes to the framework and helps other developers. The ReactJS developer community is a large group of programmers who are passionate about producing applications with the ReactJS library. Facebook and a community of individual developers and corporations cooperate to maintain ReactJS, an open-source JavaScript library for designing user interfaces.

Because ReactJS has a large and active developer community, there are numerous resources available for learning and debugging. There are also numerous third-party libraries and tools that integrate with ReactJS, such as Redux for state management and Next.js for server-side rendering.

Getting Started with ReactJS:

To get started with ReactJS, all you need is a basic understanding of HTML, CSS, and JavaScript. If you are new to the field, that is okay; you don’t need to be an expert to get started. Once you have a basic grasp of the skills mentioned above, you can follow these steps to get started with ReactJS and build your own website in no time:

Set Up Your Development Environment:

To develop ReactJS applications, Node.js and npm (Node Package Manager) must be installed on your computer. Node.js can be downloaded and installed from the official website (https://nodejs.org/en/); you don’t need to install npm separately, as it is installed automatically along with Node.js.

Create a New ReactJS Project:

Once Node.js and npm are installed on your system, you can use the create-react-app command to start a new ReactJS project, which is the most popular and easiest way to create one. This automatically creates a new project folder with all of the required files and dependencies.
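Assuming Node.js and npm are installed, the commands look like this ("my-app" is a placeholder project name):

```shell
npx create-react-app my-app   # scaffold a new project
cd my-app
npm start                     # start the development server
```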

Create a Component:

The next step is to create a component file for the interface and its functionality. A component is a piece of reusable code that can be used on various pages or anywhere in the application. To create a component, open a new JavaScript file and define the component; it is as simple as that.
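For instance, a minimal (hypothetical) component file might look like this:

```jsx
// Greeting.js - a hypothetical example component
import React from "react";

function Greeting(props) {
  return <h1>Hello, {props.name}!</h1>;
}

export default Greeting;
```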

Render the Component:

You can render the component in the browser after you have created it by adding and importing it into the main App.js file.
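For instance, assuming you created a component called Greeting in Greeting.js, App.js would import and render it:

```jsx
// App.js
import React from "react";
import Greeting from "./Greeting"; // the component file you created

function App() {
  return (
    <div>
      <Greeting name="World" />
    </div>
  );
}

export default App;
```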

Add Interactivity:

State and props can be used to add interactivity to your ReactJS application. State is used to manage data within a component, whereas props are used to pass data from one component to another.
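A sketch combining both (names are illustrative; a JSX build setup is assumed):

```jsx
import React, { useState } from "react";

// State lives in Counter; the current count is passed to Display as a prop.
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <div>
      <Display value={count} />
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}

function Display({ value }) {
  return <p>Count: {value}</p>;
}
```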


ReactJS is a powerful and popular front-end framework that makes it easy to build large-scale web applications. Its speed, scalability, and ease of use make it an ideal choice for developers. Getting started with ReactJS is easy, and with a little practice, you can start building your own applications in no time. If you’re new to ReactJS, start with the basics and gradually work your way up to more advanced topics like routing, hooks, and performance optimization.

Overall, ReactJS is a robust and effective library for creating user interfaces, with numerous advantages for both developers and end users. This is why it is more popular nowadays.

Introduction to Docker Containers


Author: Qamar Khurshid,

Docker containers are a popular and efficient way to package and deploy applications, and the Docker command-line interface (CLI) provides a convenient way to manage and deploy containers. In this blog post, we’ll take a closer look at the Docker CLI and some of its basic commands, and explain how to use them to deploy Docker containers.

The Docker CLI is a tool that allows users to interact with Docker from the command line, and provides a wide range of commands for managing and deploying Docker containers. Some of the most commonly used Docker CLI commands include:

docker run: This command is used to run a Docker container. It takes a Docker image as input, and creates a new container based on that image.

docker ps: This command lists all running Docker containers on the host machine.

docker stop: This command stops a running Docker container. It takes the container’s name or ID as input.

docker rm: This command removes a stopped Docker container. It takes the container’s name or ID as input.

docker build: This command is used to build a Docker image from a Dockerfile. A Dockerfile is a text file that contains the instructions for building a Docker image.

To deploy a Docker container, you first need to create a Docker image. This can be done using the docker build command, which takes a Dockerfile as input and produces a Docker image as output. Once you have a Docker image, you can use the docker run command to create and start a new container based on that image (the docker start command is used to restart a container that has been stopped).

For example, let’s say you have a simple Node.js application that you want to deploy as a Docker container. First, you would create a Dockerfile that specifies the instructions for building a Docker image for the application. This might look something like this:
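The Dockerfile itself is missing from the original post; here is a minimal sketch (the base image, port, and entry-point file name are assumptions; adjust them to your app):

```dockerfile
# A typical Dockerfile for a simple Node.js application.
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```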

Next, you can use the docker build command to build a Docker image from the Dockerfile:

docker build -t my-node-app .

This will create a Docker image named my-node-app based on the instructions in the Dockerfile. Once you have the Docker image, you can use the docker run command to create and start a Docker container based on the image:

docker run -d -p 3000:3000 --name my-node-app my-node-app

This will create a new Docker container named my-node-app, and start it in detached mode (-d). It will also map port 3000 on the host machine to port 3000 on the container (-p 3000:3000), which will allow you to access the application from the host machine.

To verify that the container is running, you can use the docker ps command, which will list all running Docker containers on the host machine:

docker ps

This will show the running Docker containers, along with their IDs, images, status, ports, and names.

Lazy Loading with React


Author: Muhammad Fraz,

The world of front-end development is constantly evolving, and people are creating more and more complex and powerful applications every day. Naturally, this led to massive code bundles that can drastically increase app load times and negatively impact the user experience. This is where lazy loading comes in.

What is Lazy Loading?

Lazy loading is a design pattern for optimizing web and mobile apps.

When we launch a React web application, it normally bundles the whole application at once, loading everything for us, including all of the application’s pages, images, and content, possibly resulting in a slow load time and overall poor performance, depending on the size of the content and the available internet bandwidth at the time.

In earlier versions of React, lazy loading was implemented using third-party libraries. However, the React v16.6 update introduced two new native functions for implementing lazy loading.

In this tutorial, we’ll show you how lazy loading works in React.js, demonstrate how to use code splitting and lazy loading with React.lazy and React.Suspense, and create a React demo app to see these concepts in action.

The Benefits of lazy loading

The essential benefits of lazy loading are performance-related:

  • Fast initial loading: By decreasing the page weight, lazy loading a web page allows a faster initial page load time.
  • Less bandwidth consumption: Lazy-loaded images save data and bandwidth, which is especially valuable for people who don’t have fast internet.
  • Decreased work for the browser: When images are lazy-loaded, your browser does not need to process or decode them until they are requested by scrolling the page.

React.lazy() is a function that enables you to render a dynamic import as a regular component. Dynamic imports are a method of code-splitting. React.lazy() removes the need to use a third-party library such as react-loadable or react-waypoint.

// without React.lazy()
import NewComponent from './NewComponent';

const MyComponent = () => (
  <div>
    <NewComponent />
  </div>
);

// with React.lazy()
const NewComponent = React.lazy(() => import('./NewComponent'));

const MyComponent = () => (
  <div>
    <NewComponent />
  </div>
);

React.Suspense lets you specify a loading indicator for components in the tree below it that are not yet ready to render. While the lazy components are loading, placeholder content can be shown by passing a fallback prop to the Suspense component.

import React, { Suspense } from "react";

const LazyComponent = React.lazy(() => import('./NewComponent'));
const LazyComponent1 = React.lazy(() => import('./NewComponent1'));

const MyComponent = () => (
  <Suspense fallback={<div>Loading...</div>}>
    <LazyComponent />
    <LazyComponent1 />
    {/* Here you can add more lazy components... */}
  </Suspense>
);


The Disadvantages of lazy loading

As already mentioned, lazy loading has many advantages. However, overusing it can have a significant negative impact on your application, so it’s important to understand when you should and should not use lazy loading. The disadvantages are listed below:

  • Not suitable for small-scale applications.
  • Requires additional communication with the server to fetch resources.
  • Can affect SEO and ranking.

AWS QuickSight vs Microsoft Power BI


Author: Muhammad Zaki Khurshid,

Business Intelligence (BI) tools such as AWS QuickSight, Tableau, Power BI, and IBM Cognos (among many others) are designed to assist companies in generating business insights with the help of visuals. Since the BI market is highly competitive, the companies behind these solutions have added distinct features in order to target customer bases that might use those features in their business requirements.
In this article, we shall make a brief comparison between the two Business Intelligence solutions: AWS QuickSight and Microsoft Power BI. We will first talk about the two technologies separately, highlighting the key features that each tool provides, and also the pros and cons of using the two.

AWS QuickSight

AWS QuickSight is a cloud-based BI solution (running on the Amazon Web Services platform) that you can use to build visuals, perform ad-hoc analysis, generate business insights, and share the results with others. It connects to a variety of data sources, including AWS data (S3, Athena, Redshift, etc.), third-party data, spreadsheet data, and more. QuickSight processes data through SPICE, which stands for Super-fast, Parallel, In-memory Calculation Engine; Amazon describes it as a robust in-memory engine that performs advanced calculations and serves data. When you create a dataset in QuickSight, you can either import it into SPICE or perform a direct query (a way of querying the data source directly instead of importing it into the tool). It is recommended to load data into SPICE so that QuickSight can access it quickly and efficiently. Direct query, on the other hand, accesses the data by querying the source directly, which is considered inefficient in QuickSight because the data is queried every time a change is made in the analysis.


Here is a high-level architecture of QuickSight. In summary, the data source connects to SPICE, which loads and processes the data (cleaning, transformations, etc.). This data is then fed into QuickSight for data visualization.

Key Features

The visual presentation of QuickSight is one of its key selling points. Although the quantity of visual types may be limited, for QuickSight it’s about how the visuals appeal to the end user. Following are some of the major components and key features of AWS QuickSight:

  • Visuals – These are the components you use to represent your data in the form of visuals. You have bar charts, box plots, combo charts, heat maps, KPIs, line charts, and many more visual types that you can use to create meaningful reports.
  • Insights – As the name suggests, this feature allows you to generate insights with the help of built-in machine learning algorithms. This feature is quite useful because it allows you to interpret your data in a way that might add value to your analysis.
  • Sheets – These are like separate pages that you see in Power BI, where you can keep a group of visuals on a single page. You can have one sheet showing visuals that represent the sales of a company, and another page showing visuals related to inventory analysis.
  • Simplicity – Although this is not a proper ‘feature’, it is certainly important for this BI tool’s appeal to the market. Even people without much technical knowledge can easily explore data and extract valuable insights because of the simplistic and intuitive nature of the tool. Most of the time you are performing simple operations (arithmetic, string, date, etc.) on the data, changing data types, and dragging and dropping fields onto the visuals.
  • Speed – Speed is a major selling point for QuickSight, due to its SPICE Engine.

Using all of the features provided by QuickSight, users can create meaningful, beautiful, and interactive reports to assist stakeholders in various business areas, such as:

  • Marketing.
  • Finance.
  • Sales.

Pros and Cons

Just like any other tool, QuickSight has its pros and cons. Here are some of the pros of using this tool.

  • Easy to Use – As mentioned earlier, QuickSight is very simple and intuitive to use. The users can configure and start using the tool in no time. It also takes less time to learn the tool, so if this is your first BI tool, working on the data and creating visuals would seem very easy.
  • Everything is on the Cloud – As QuickSight runs on the AWS platform, you don’t really need to set up anything on your system. You just need a working AWS account, a subscription, and a network connection so that you can easily access the tool via the web. Even on a low-end system, QuickSight runs flawlessly, since everything is hosted on the AWS cloud platform. You can also access the tool from an Android or iOS device, since the mobile integration is also excellent and allows users to view content in a seamless manner.
  • Quality of the Visuals – QuickSight has some stunning visual types in its collection. Although they are limited in quantity, they can certainly make a huge difference in visual presentation.
  • Pricing – QuickSight’s pricing is pretty optimal for an average user compared to other platforms. For more information on pricing, please visit the link here.
  • Speed – This is one of the key features of using QuickSight. Because of the SPICE engine, data loading and processing becomes a great experience for all levels of users.

Now that we have highlighted the pros, here are some of the cons of using QuickSight.

  • Limited Visual Types – As mentioned earlier, QuickSight has quality visuals, but they are limited in quantity. So, if you need a visual that is not present in the collection, you might need to look for an alternative within the available visuals set.
  • Simplicity – The ease of use and simplicity was highlighted as an advantage, but it’s also one of its major disadvantages. Now this totally depends upon the use-case. If your reports require simple data connectivity, simple calculations, and visuals that only need fields to be dragged and dropped on to them, then QuickSight is a great choice. But for cases where we have to perform high level transformations, calculations, and complex reporting, this tool is not an optimal one. For complex reporting, there are tools such as Power BI, which we shall talk about in the next section.
  • Still New to the Scene – QuickSight is still fairly new in the BI market, so this solution has to play catch-up with competitors such as Tableau and Power BI in terms of adding new features that support complex data processing, reporting, and sharing, so that it appeals to the mass market and, in particular, big corporations.

Now that we have talked briefly about QuickSight, let’s take a look at its competitor: Microsoft Power BI.

Microsoft Power BI

As the name suggests, Power BI is a Business Intelligence software product created by Microsoft. It combines business analytics, data visualizations, and best practices that help an organization make decisions. Power BI is also one of the leading BI solutions in the market, and many have ranked it as the best BI solution out there. Although such rankings are quite subjective, it is fair to say that Power BI is considered a mainstream solution in the BI domain.


Let’s talk about the high-level architecture of Power BI. To demonstrate this, here is a diagram showing the various components of Power BI. If you’re already familiar with Power BI, you will notice that I have excluded Power BI Report Server from the diagram. While it is also part of this ecosystem, the only major difference between Power BI Report Server and Power BI Service is that the former is an on-premises report-sharing platform, whereas the latter is cloud-based. With that said, here is the diagram.

Let’s break down this architecture diagram. Usually, these are the components of a report in Power BI:

  • Data Source – Power BI connects to a variety of data sources and uses their data to create reports. It can connect to databases (SQL Server, Redshift, Oracle, etc.), spreadsheets, JSON, XML, SharePoint folders, and many more. If you want to read more about the compatible data sources, click the link here.
  • Power BI Desktop – This is the desktop application that is used by developers to ingest the data, process the data (data transformations and modelling), create visuals and then publish the report to the cloud (Power BI Service), or on-premises server (Power BI Report Server). Power BI Desktop is primarily used for developing the report, so you would find all the options here that could be used to create a report specific to your requirement. This application is only available on Windows, so if you are using MacOS, you might need to install a VM and run the app there.
  • Power BI Service – This is the cloud platform that allows you to share reports with stakeholders. Developers create reports using Power BI Desktop and then publish them to Power BI Service so that end users can generate insights and make data-driven decisions. Power BI Service has various features that allow you to create workspaces and configure dashboards, workspace apps, dataflows, and much more. Also keep in mind that Power BI Service is not for developing BI reports, so any major change to an existing report is made in Power BI Desktop.
  • Browser & Mobile Apps – Once the report has been published to Power BI Service, users can easily view it in a web browser or in the dedicated apps for Android and iOS devices. To see the reports, users need access to their accounts.

Creating and Sharing a Report in Power BI

To create a new report, you need to use the Power BI Desktop app because, as mentioned earlier, it is primarily used for report development. If you don’t have the app installed, you can easily download it from Microsoft’s website or from the Microsoft Store. Personally, I prefer the Store option since it updates the app automatically. Once you open the app, you are welcomed by the UI of Power BI Desktop, which looks like this.

As you can see, we have the blank canvas at the center where you place all of your visuals. On top, we have various options to connect to data sources, go to the Power Query Editor (which we shall talk about later), add Measures/Calculated Columns, go to the View tab, etc. On the right, you have standard visual types which you can drag onto the canvas to create a visualization. Apart from the standard visuals, you also have the option to download custom visuals from the built-in store. These custom visuals are made by developers from around the world and are either paid or free. On the left, you have three different views: report view, table view, and modeling view. The report view is primarily used for creating visualizations; the table view for looking at the loaded data, adding calculated columns or tables, and changing data types; and the modeling view for creating relations between tables, hiding certain fields/tables, etc.
Now let’s jump to the first key component of creating a report: connecting to a data source. In Power BI Desktop, you can connect to a variety of data sources and create reports using them. For reference, here is a snapshot of some of the data sources that you can connect to.

As you can see, users can connect to Excel, XML, JSON, SQL Server, Oracle databases, Azure data sources, and much more. You can also search for the data source you are looking for, since this window scrolls through a lot of options. This goes to show that Power BI Desktop is compatible with the majority of data sources out there.
Once the data source is connected, you can either start developing the report or transform the data using a built-in tool called Power Query Editor. This tool is built into the Power BI Desktop app and is one of its most important parts, since it allows you to clean and transform the data. Power Query Editor performs all the data processing using a language called M. Here is a brief overview of the UI of Power Query Editor. We won’t go into the details of the tool, but it offers a lot of features that may be useful for your requirements.

Power Query Editor performs standard transformations, like changing data types, performing arithmetic/string/date operations, joins, group-bys, handling missing values, and much more. You can also apply machine learning and AI techniques to your data to generate various insights. On top of all this, you can even run Python or R scripts on your data set to handle issues that might not be easy to solve with the standard options in Power Query Editor. All of these options can be used with a few clicks (of course, Python and R require writing scripts), and Power Query Editor automatically translates the transformations into equivalent M code. It also allows you to edit the M code directly, but that is usually done by more advanced users who are comfortable with the tool.
As you finish the data processing in Power Query Editor, you can load the data back into Power BI Desktop, where you model it and, ultimately, create the report to share with the stakeholders. Modeling the data means creating relations between tables using common fields, and creating Measures (functions that return scalar values), Calculated Columns, and Calculated Tables using a language called DAX (Data Analysis Expressions). DAX is used to perform various calculations within the report and can become quite complex depending on the requirement you’re trying to fulfill. Unlike the M language we talked about earlier, DAX requires a lot of time and patience to get good at; it is considered one of the harder languages to master, since it provides so much functionality that you learn something new in every scenario. But you should not worry too much about ‘mastering’ DAX: you naturally get better at it over time and can navigate problems relatively easily (googling problems helps quite a lot, though).
After modeling the data, you can start creating visuals inside the report canvas. You can use features such as bookmarks, tooltips, drill-down, and drill-through to make the report more interactive. Just like QuickSight, you can also add multiple pages/sheets to group together visuals that represent a certain analysis. You can format the visuals to specification, use built-in machine learning and AI techniques, and create visuals with the help of R and Python. Once the report is created and ready to be shared, you can publish it to Power BI Service, where you collaborate with your colleagues (such as quality assurance engineers and other developers) to finalize the report and share it with the actual consumers: the stakeholders.

To summarize, Power BI is definitely a bit more complex than AWS QuickSight and takes some time to get used to. Overall, it is a great solution if you want to build detailed reports.

Pros and Cons

Now that we have highlighted some key features of Power BI, let’s talk about the advantages and disadvantages of using the software. Let’s first look at some of the pros.

  • Availability / Affordability – Power BI in general is quite affordable to use. The desktop app is free for everyone, so if you want to learn about the tool and work on projects, you can easily do so without paying anything. However, if you want to use Power BI Service and all its features, you would need to purchase a pro license at minimum, starting at $13.70. For more details on pricing, please visit the mentioned link.
  • Abundant Data Sources – Power BI connects to a variety of data sources, and has great integration with Microsoft’s proprietary services such as Excel, SQL Server. If you ever come across a data source that you’ve not heard of, and want to see if it is compatible with Power BI, there is a good chance it will be available in the list.
  • Monthly Updates – The good thing about Power BI is that it is updated every single month. The developers at Microsoft are constantly adding and improving features, so over time Power BI has become a refined product.
  • Mainstream BI Tool – Since Power BI is one of the leading BI solutions on the market, the community support is quite impressive. If you ever run into a problem, there is a good chance others have come across it too, and you can easily google it to find a solution. On top of this, there are tons of online resources for learning the tool: you can watch videos on YouTube, read articles, and study courses on various websites. Personally, I use SQLBI and DAX Guide quite a lot, as well as YouTube channels such as Curbal, to learn more about the features within Power BI. The tool has also become a major requirement in the market if you want to become a Data Analyst / BI Developer, so knowing how to work with it will definitely help you stand out in the interview process.
  • Custom Visuals – One key advantage of using Power BI Desktop is its ability to use Custom visuals. If your requirement cannot be fulfilled with the default visuals, you can always visit the store and search for the visual type you’re looking for. Although I would mention here that it takes time to search for a particular visual, it’s still safe to say that the tool has a variety of visuals to choose from.

Here are some of the cons of using Power BI. Of course, there are minute details, but these are the major issues I can think of:

  • Takes Time to Learn – As Power BI comes with a lot of features, it definitely takes some time to become good at it. You are constantly learning new things as you encounter various situations. Power BI is not just a simple drag-and-drop type of BI tool; it comes with a complete suite of products and features. So, learning most of the things comes with experience and effort.
  • Performance – One thing I have noticed over time with Power BI is performance. With big data, your reports can become quite slow. To mitigate the issue, you have to optimize the reports by applying various techniques. Applying these optimization techniques requires knowledge and experience, so if you’re new to the BI domain, optimization can become a major hurdle, and you can end up with a report that the end user can’t even load.

Power BI vs QuickSight: The Comparison

Now that we have highlighted the key features, as well as pros and cons of using the two BI solutions, we are ready to make a brief comparison between the two. To make things simple, I have created this table which highlights the major differences between them.


The two BI solutions that we have discussed here have their unique features and target markets. Both have their ups and downs, and if I had to pick a tool, I would definitely choose Power BI for the reasons highlighted in this article. Power BI fits the needs of the majority of users, but you can always use QuickSight if your reporting is simple and you do not require all the features that Power BI provides. We do hope, however, that QuickSight makes up for lost time and catches up to its competitors by adding new features consistently over time, so that it challenges the top contenders and takes a fair share of the market.

Using Typescript with React Native


Author: Shaban Qamar,

We all love JavaScript: it is the common language for building React Native apps. But some of us also love types. Luckily, options exist to add stronger types to JavaScript. Our favorite is TypeScript, but React Native supports Flow out of the box. Today, we’re going to look at how to use TypeScript in React Native apps.

Commands which are used

To create a React Native app with JavaScript, we use this command:
npx react-native init

To create a React Native app with TypeScript, we use this command:
npx react-native init --template react-native-template-typescript
However, there are some limitations to Babel’s TypeScript support, which is why this guide also sets up the React Native TypeScript Transformer.


Since you might be developing on one of several different platforms, targeting several different types of devices, basic setup can be involved. You should first ensure that you can run a plain React Native app without TypeScript. Once you’ve managed to deploy to a device or emulator, you’ll be ready to start a TypeScript React Native app.
You will also need Node.js, npm, and Yarn.


Once you’ve created the basic React Native project, you’ll be ready to start adding TypeScript. Let’s go ahead and do that.

Adding TypeScript

The next step is to add TypeScript to your project. The following commands will:

  • add TypeScript to your project
  • add React Native TypeScript Transformer to your project
  • initialize an empty TypeScript config file, which we’ll configure next
  • add an empty React Native TypeScript Transformer config file, which we’ll configure next
  • add typings for React and React Native
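The commands themselves appear to have been lost here; based on the standard React Native TypeScript Transformer setup, they would look roughly like this (the exact package names and flags are assumptions):

```shell
yarn add --dev typescript                            # add TypeScript
yarn add --dev react-native-typescript-transformer   # add the RN TypeScript Transformer
yarn tsc --init --pretty --jsx react                 # initialize an empty tsconfig.json
touch rn-cli.config.js                               # empty transformer config file
yarn add --dev @types/react @types/react-native      # typings for React and React Native
```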

The tsconfig.json file contains all the settings for the TypeScript compiler. The defaults created by the command above are mostly fine, but open the file and uncomment the following line:
/* Search the config file for the following line and uncomment it. */
// "allowSyntheticDefaultImports": true,  /* Allow default imports from modules with no default export. This does not affect code emit, just typechecking. */

The rn-cli.config.js contains the settings for the React Native TypeScript Transformer. Open it and add the following:
module.exports = {
  getTransformModulePath() {
    return require.resolve('react-native-typescript-transformer');
  },
  getSourceExts() {
    return ['ts', 'tsx'];
  },
};

Rename the generated App.js and __tests__/App.js files to App.tsx. index.js needs to keep the .js extension. All new files should use the .tsx extension (or .ts if the file doesn’t contain any JSX).

If you tried to run the app now, you’d get an error like "object prototype may only be an object or null". This is caused by a failure to import the default export from React as well as a named export on the same line. Open App.tsx and modify the import at the top of the file, changing:

import React, { Component } from 'react';

to:

import React from 'react';
import { Component } from 'react';

Adding TypeScript Testing Infrastructure

React Native ships with Jest, so for testing a React Native app with TypeScript, we’ll want to add ts-jest to our devDependencies.
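The install command itself isn’t shown above; assuming Yarn as the package manager, it would be:

```shell
yarn add --dev ts-jest
```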
Then, we’ll open up our package.json and replace the jest field with the following:
"jest": {
  "preset": "react-native",
  "moduleFileExtensions": ["ts", "tsx", "js"],
  "transform": {
    "^.+\\.(js)$": "<rootDir>/node_modules/babel-jest",
    "\\.(ts|tsx)$": "<rootDir>/node_modules/ts-jest/preprocessor.js"
  },
  "testRegex": "(/__tests__/.*|\\.(test|spec))\\.(ts|tsx|js)$",
  "testPathIgnorePatterns": ["\\.snap$", "<rootDir>/node_modules/"],
  "cacheDirectory": ".jest/cache"
}
This will configure Jest to run .ts and .tsx files with ts-jest.

Installing Dependency Type Declarations

To get the best experience in TypeScript, we want the type-checker to understand the shape and API of our dependencies. Some libraries will publish their packages with .d.ts files (type declaration/type definition files), which can describe the shape of the underlying JavaScript. For other libraries, we’ll need to explicitly install the appropriate package in the @types/ npm scope.

For example, here we’ll need types for Jest, React, React Native, and React Test Renderer.

yarn add --dev @types/jest @types/react @types/react-native @types/react-test-renderer

We saved these declaration file packages to our dev dependencies because this is a React Native app that only uses these dependencies during development and not during runtime. If we were publishing a library to NPM, we might have to add some of these type dependencies as regular dependencies.

Ignoring More Files

For your source control, you’ll want to start ignoring the .jest folder. If you’re using git, we can just add entries to our .gitignore file.

# Jest
.jest/

As a checkpoint, consider committing your files into version control.
git init
git add .gitignore # important to do this first, to ignore our files
git add .
git commit -am “Initial commit.”

After completing all the steps above, you are good to go. You can create screens and components just as you would in JavaScript, but remember that you are no longer writing JavaScript; it’s TypeScript, so work within the environment you have just set up.
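As a tiny illustration of what TypeScript buys you, here is the kind of typed-props pattern you would use in a .tsx file. The names are invented, and the component body is reduced to a plain function so the snippet stands alone; in a real component the string would be rendered inside a Text element:

```typescript
// Hypothetical props interface: the compiler now rejects a missing `name`
// or a non-numeric `enthusiasmLevel` at build time rather than at runtime.
interface GreetingProps {
  name: string;
  enthusiasmLevel?: number; // optional, defaults to 1 below
}

function greetingText({ name, enthusiasmLevel = 1 }: GreetingProps): string {
  return `Hello ${name}${"!".repeat(enthusiasmLevel)}`;
}

console.log(greetingText({ name: "TypeScript" }));                       // Hello TypeScript!
console.log(greetingText({ name: "React Native", enthusiasmLevel: 3 })); // Hello React Native!!!
```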
To run the project just type the following command:

For Android:

npx react-native run-android

For iOS:

npx react-native run-ios

GitHub Branching Strategy


Author: Muhammad Raza Saeed,

There is already a lot of contention and debate around using Git Flow vs GitHub Flow branching model since there are trade-offs to using either. This is a concise summary.

  • For teams who must make formal releases on a longer time scale (a few weeks to a few months between releases) and be able to perform hotfixes, maintenance branches and other things that emerge from shipping so infrequently, git-flow makes sense.
  • For organizations that have established a culture of shipping, push to production frequently (if not daily), and are constantly testing and deploying, it is advisable to choose something simpler, like GitHub Flow.

However, comparing Git Flow vs GitHub Flow is not the goal of this report. The purpose of this is to promote the use of the most straightforward Branching Model that will work for all potential project teams. The “Branching Strategy” and the GitHub “Workflows” across projects need to be standardized immediately utilizing the “Perspective” method. Utilizing the Git Flow model is recommended.

Key Benefits of Git-Flow Branching Model:

  • Parallel Development:
    GitFlow is useful because it isolates new development from completed work, which makes parallel development simple. Feature branches are used to work on new features and non-emergency bug fixes, and they are only merged back into the main body of code when the developer is satisfied that the work is ready for release.
    Despite the possibility of interruptions, all you must do to go from one task to another is commit your modifications and then make a new feature branch for it. Check out your original feature branch once the task is complete to pick up where you left off.
  • Collaboration:
    Feature branches also make it simpler for two or more developers to work together on the same feature, because each feature branch contains only the changes required to make the new feature functional, which makes it very simple to see and understand what each collaborator is doing.
  • Release Staging Area:
    As new development is completed, it gets merged back into the develop branch, which is the staging area for all completed features that haven’t yet been released. So when the next release is branched off of develop, it will automatically contain all of the new work that has been finished.
  • Support For Emergency Fixes:
    GitFlow supports hotfix branches: branches made from a tagged release (or the master branch). You can use these to make an emergency change, safe in the knowledge that the hotfix will only contain your emergency fix. There’s no risk that you’ll accidentally merge new development at the same time.

Branches Explained:

Main Branches:

  • Master – this branch contains the stable code running in production. Projects should consider origin/master to be the main branch, where the source code of HEAD always reflects a production-ready state.
  • Develop – this is often referred to as the “integration branch”. It is also the starting point of the feature. When the source code in the origin/develop branch reaches a stable point and is ready to be released, projects should create a release branch.

Supporting Branches:

  • Feature – every time there’s a new feature to be implemented, a new branch needs to be created following this pattern feature/<Jira_storyID>-<summary>. Must merge back into the develop branch.
  • Release – ideally this branch should be used for UAT releases. The key moment to branch off a new release branch from develop is when the develop branch reflects the desired state of the new release. It should be merged into the master branch once a release (or UAT) is complete, and all UAT fixes should be merged back into develop.
  • Bugfix – a bugfix branch should be used for fixing UAT bugs. Bugfix branches are branched from release branches, and once the UAT bug is fixed, the change is merged back into the release branch.
  • Hotfix – a hotfix branch is a lot like release branches and feature branches except they are branched from master instead of develop. When a critical bug in a production version must be resolved immediately, a hotfix branch needs to be branched off from the corresponding tag on the master branch that marks the current production version.

Supporting Branches – Prefix Conventions:

  • Feature -> feature/**
  • Release -> release/**
  • Hotfix -> hotfix/**
  • Bugfix -> bugfix/**

Conventions of Git-Flow Approach:

  • Using short lived branches.
  • When a feature is completed a Pull Request is created to merge with develop branch. This allows code review and integration tests to be verified before merging.
  • The new feature to be developed needs to follow similar syntax like feature/<Jira-storyID>-<summary>
  • The hotfix to be developed needs to follow the syntax like hotfix/<IssueID>-<summary>
  • The release to be created needs to follow the syntax like release/<version>
  • The develop branch is the main developer’s integration branch.
  • The master branch always reflects the current code from production.
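As an illustration, here is roughly what the feature convention above looks like on the command line, in a throwaway repository with hypothetical branch names (requires git 2.28+ for the -b flag):

```shell
set -e
# Scratch repo so the example is self-contained.
git init -q -b master flow-demo && cd flow-demo
git config user.name demo && git config user.email demo@example.com
git commit --allow-empty -q -m "Initial commit"

git branch develop                                    # the integration branch
git checkout -q -b feature/PROJ-123-login develop     # feature/<Jira-storyID>-<summary>
git commit --allow-empty -q -m "Add login screen"

# In practice this merge happens through a Pull Request after code review:
git checkout -q develop
git merge -q --no-ff feature/PROJ-123-login -m "Merge feature/PROJ-123-login"
git branch -d feature/PROJ-123-login                  # short-lived branch is deleted
git log --oneline
```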

SDLC Overview:

Development Phase:
New development (new feature, sprint bugs) is built into feature branches. Feature branches are branched off from develop branch, and finished features and fixes are merged back into the develop branch once ready.

UAT and GO Live Phase:
When it is time to make a release, a release branch is created from develop. The code in the release branch is deployed onto UAT. UAT bugs are recreated and fixed in bugfix branches. This deploy -> test -> fix -> redeploy -> retest cycle continues until the customer is happy that the release is good enough to ship to production for end users.
When the release is finished, the release branch is merged into master, and merged back into develop to make sure that any changes made in the release branch aren’t accidentally lost by new development.

Post GO Live Phase:
The master branch has the production code. Therefore, it is important to tag the master branch with version of the production release. The only commits to master are merges from release branches and hotfix branches.
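The hotfix flow described above can be sketched as follows, again in a throwaway repository with hypothetical names (requires git 2.28+ for the -b flag):

```shell
set -e
# Minimal repo standing in for a project already live in production.
git init -q -b master hotfix-demo && cd hotfix-demo
git config user.name demo && git config user.email demo@example.com
git commit --allow-empty -q -m "Release 1.0"
git branch develop
git tag v1.0.0                                    # master is tagged with the production version

# Critical production bug: branch off the production tag, not develop.
git checkout -q -b hotfix/BUG-42-crash v1.0.0
git commit --allow-empty -q -m "Fix production crash"

git checkout -q master
git merge -q --no-ff hotfix/BUG-42-crash -m "Merge hotfix/BUG-42-crash"
git tag v1.0.1                                    # tag the new production version

# Back-merge so develop also gets the fix.
git checkout -q develop
git merge -q master -m "Back-merge hotfix into develop"
```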

KMM as a New Approach to Cross-platform App Development


What is KMM?

KMM stands for Kotlin Multiplatform Mobile, and it’s a new way to develop mobile apps that combines the native and cross-platform approaches. With this method, we can write common server logic once for several platforms. By server logic we mean not backend development but the “server layer”: the part of the application that exchanges data between the app on the phone and the server.

At the same time, the UI part will be separate for each platform. For example, there will be one code for iOS and a different one for Android. For more detailed information, you can check the official website.
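To make the idea concrete, here is a minimal, hypothetical sketch of the kind of shared logic you would put in a KMM common module. A real project would use expect/actual declarations plus libraries like Ktor and kotlinx.serialization; plain Kotlin stands in here so the snippet is self-contained:

```kotlin
data class User(val id: Int, val name: String)

// Shared "server layer" logic, written once in Kotlin and reused by both
// the Android and iOS apps. The platform side supplies the actual HTTP call.
class UserRepository(private val fetch: (Int) -> String) {
    fun loadUser(id: Int): User {
        val raw = fetch(id)                       // e.g. a network response
        val (idPart, namePart) = raw.split(",")   // trivial stand-in for JSON parsing
        return User(idPart.toInt(), namePart)
    }
}

fun main() {
    // A fake "network" so the example runs anywhere.
    val repo = UserRepository { id -> "$id,Alice" }
    println(repo.loadUser(7)) // User(id=7, name=Alice)
}
```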


However, KMM also has some drawbacks:

  • In KMM, the shared server logic is written in Kotlin, which is originally an Android app development language. The problem is that not many iOS developers know this language well enough to use it in app development, so it may not be easy to find a team that can handle this approach.
  • The novelty of the framework. KMM is still in the beta stage, which means that no one can guarantee its stability. So if you decide to create your mobile apps using KMM, you will probably need a maintenance team in case any errors appear. The good news here is that the Kotlin team has promised to release the alpha this year.




Despite the fact that KMM appeared quite recently, it already has many fans among well-known brands. For instance, it helps Netflix optimize speed and product reliability. Leroy Merlin uses KMM in their mobile app. Among the KMM users, you can also find giants such as Philips, Cash App, VMware, Quizlet, Autodesk, and many others.

If you are thinking about cross-platform app development services for your idea, you really should consider KMM.