
How to Publish React Native App to Google Play Store

DevGate

Author: Haider Ali,

React Native is a popular framework for building cross-platform mobile applications. Once you have developed your React Native app, the next step is to publish it to app stores such as the Google Play Store. Before an app can be published on the Google Play Store, you must register as a Google Play developer, prepare the app for release, create a release build, and submit the app for review.

The processes necessary to publish your React Native app on the Google Play Store are outlined in this guide. Regardless of whether you are an experienced developer or new to the field of mobile app development, this article will give you a clear and comprehensive overview of the steps necessary to have your app accepted by the market.

Your Android application must be signed with a release key before you can distribute it through the Google Play Store. Save this key, because the same key must be used for all future updates. Since 2017, Google Play has been able to manage release signing automatically through App Signing by Google Play, but your application binary still needs to be signed with an upload key before it is submitted to Google Play. The topic is covered thoroughly on the Signing Your Apps page of the Android developer documentation. This guide gives a quick overview of the procedure along with the steps necessary to package the JavaScript bundle.

Generating an upload key

With keytool, you can create a private signing key.

For Windows:

On Windows, keytool must be executed as administrator from the JDK bin folder (for example, C:\Program Files\Java\jdkx.x.x_x\bin).
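The command below follows the React Native documentation; my-upload-key.keystore and my-key-alias are placeholder names that you can change:

```shell
keytool -genkeypair -v -storetype PKCS12 -keystore my-upload-key.keystore -alias my-key-alias -keyalg RSA -keysize 2048 -validity 10000
```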

After running this command, you will be prompted for a password for the keystore and key, and for the Distinguished Name fields for your key. The keystore is then created as a file with the name “my-upload-key.keystore.”

There is just one key in the keystore, and it is good for 10,000 days. Remember to write down the alias because you will need it later when signing your app.

For macOS:

If you’re not sure where your JDK bin folder is on macOS, run the following command:
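On macOS, the JDK location can be printed with the built-in java_home utility:

```shell
/usr/libexec/java_home
```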

The JDK directory that is produced by this command will look something like this:
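One possible output (the version number is a placeholder; yours will differ):

```
/Library/Java/JavaVirtualMachines/jdkX.X.X.jdk/Contents/Home
```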

Use the cd command to navigate to that directory, then run the keytool command with sudo access as follows:
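As in the Windows step, my-upload-key.keystore and my-key-alias below are placeholder names:

```shell
sudo keytool -genkey -v -keystore my-upload-key.keystore -alias my-key-alias -keyalg RSA -keysize 2048 -validity 10000
```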

Setting up Gradle File

  • Put the my-upload-key.keystore file in your project folder’s android/app directory.
  • Add the following (replacing ***** with the correct keystore password, alias, and key password) to the file ~/.gradle/gradle.properties or android/gradle.properties
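The variable names below follow the convention used in the React Native documentation; adjust them if you prefer different names:

```properties
MYAPP_UPLOAD_STORE_FILE=my-upload-key.keystore
MYAPP_UPLOAD_KEY_ALIAS=my-key-alias
MYAPP_UPLOAD_STORE_PASSWORD=*****
MYAPP_UPLOAD_KEY_PASSWORD=*****
```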

These are going to be global Gradle variables, which we can later use in our Gradle config to sign our app.

Adding signing config to your app's Gradle config

The final configuration step is to set up release builds to be signed using the upload key. In your project folder, edit the file android/app/build.gradle and add the following signing config.
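A sketch of the signing config, assuming the MYAPP_UPLOAD_* Gradle variable names from the React Native documentation (the `...` lines stand for your existing config):

```groovy
android {
    ...
    signingConfigs {
        release {
            if (project.hasProperty('MYAPP_UPLOAD_STORE_FILE')) {
                storeFile file(MYAPP_UPLOAD_STORE_FILE)
                storePassword MYAPP_UPLOAD_STORE_PASSWORD
                keyAlias MYAPP_UPLOAD_KEY_ALIAS
                keyPassword MYAPP_UPLOAD_KEY_PASSWORD
            }
        }
    }
    buildTypes {
        release {
            ...
            signingConfig signingConfigs.release
        }
    }
}
```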

Generating the release apk

After the configuration, run the following command to generate a release APK:
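As in the React Native documentation (run from your project root):

```shell
cd android
./gradlew assembleRelease
```

The generated APK can be found under android/app/build/outputs/apk/release/.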

If you encounter an error, try cleaning the build and running it again:
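A common first remedy is a clean build (a sketch; the right fix depends on the actual error message):

```shell
cd android
./gradlew clean
./gradlew assembleRelease
```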

Generating the release AAB

After the configuration, run the following command to generate a release AAB:
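As in the React Native documentation (run from your project root):

```shell
cd android
./gradlew bundleRelease
```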

The generated AAB, ready to be uploaded to Google Play, can be found at android/app/build/outputs/bundle/release/app-release.aab.

Enabling Proguard to reduce the size of the APK (optional)

Proguard is a tool that can slightly reduce the size of the APK.

To enable Proguard, edit android/app/build.gradle:
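Set the following flag, which is false by default in a standard React Native template:

```groovy
def enableProguardInReleaseBuilds = true
```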

This makes Proguard run and shrink the Java bytecode in release builds.

RESTful API Design: Best Practices and Tips


Author: Muhammad Ehtasham,

As more and more applications move to the cloud and the need for web services continues to grow, designing a RESTful API has become an essential skill for engineers. A well-designed API makes it easier for clients to interact with your application and can improve the overall performance and security of your system. In this article, we’ll explore some of the best practices and tips for designing a RESTful API.

What is a RESTful API?

Representational State Transfer, or REST, is an architectural approach to software development that outlines a number of guidelines for developing web services. A RESTful API is one that complies with these constraints and is designed to be straightforward and user-friendly. It communicates with resources using HTTP methods such as GET, POST, PUT, and DELETE, and it employs HTTP status codes to tell the client the outcome of a request.

Best Practices for RESTful API Design

  • Resource Names Should Be Nouns: Resources are the foundation of a RESTful API. Instead of using verbs, name your resources with nouns. For example, use /users instead of /getUsers.

  • Use HTTP Methods Properly: Use the correct HTTP method for the kind of activity you want to carry out on a resource. Use GET to obtain resources, POST to add new resources, PUT to update already existing resources, and DELETE to remove resources.
  • Utilize HTTP Status Codes: To describe the request’s status, use HTTP status codes. For instance, use 200 for a successful response, 404 for a resource that cannot be found, and 500 for a server error.
  • Consistent Resource Naming: Keep your API’s resource naming consistent. For instance, if /users returns a list of users, use /users/{id} to get a specific user.
  • Use Pagination to Limit Huge Data Sets: Pagination can be used to control how much data is returned in response to a single request. Huge data sets can be managed more easily, which could improve the performance of your API.
  • Use Query Parameters for Filtering: To filter the information your API delivers, use query parameters. This can make it easier for clients to access the information they need and can reduce the amount of data returned in a single request.
  • Use HATEOAS: Provide links to related resources using HATEOAS (Hypermedia as the Engine of Application State). This helps clients discover new resources and makes your API easier to use.
  • Use Versioning: Manage API modifications using versioning. By doing so, breaking changes can be avoided, and maintaining backward compatibility will be simpler.
  • Use Authentication and Authorization: Employ authentication and authorization to control access to your API. This can enhance your system’s security and prevent unauthorized access.
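Put together, these conventions look like the following sketch (the /v1/users resource, IDs, and query parameters are hypothetical):

```
GET    /v1/users            -> 200 OK        (list users; filter/paginate with ?page=2&limit=50)
GET    /v1/users/42         -> 200 OK or 404 Not Found
POST   /v1/users            -> 201 Created
PUT    /v1/users/42         -> 200 OK
DELETE /v1/users/42         -> 204 No Content
```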

Tips for RESTful API Design

  • Make It Simple: Keep your API straightforward and user-friendly, and make sure your documentation is clear and easy to understand.
  • Design for Scale: While developing your API, consider how it will scale, and think about how heavy traffic and large data sets will be handled.
  • Design for Performance: Make an API that responds quickly, and keep response times in mind as you optimize it.
  • Implement error handling: Provide error handling for the delivery of clear and instructive error messages. Clients may be able to comprehend and address problems with their requests as a result.
  • Utilize Caching: Employ caching to enhance your API’s performance. By doing this, you can lower the volume of queries sent to your system and speed up response times.


Although creating a scalable, secure, and user-friendly RESTful API can be challenging, it is possible if you follow best practices and recommendations. By using the appropriate HTTP methods, status codes, and resource names, you can create an API that is understandable and straightforward to use. By using HATEOAS, query parameters, and pagination, you can make it easier for users to access your API. Versioning, authentication, and authorization can improve your system’s security and maintainability.
When designing a RESTful API, it’s critical to keep the ideas of performance, scalability, and simplicity in mind. By building with these concepts in mind, you may create an API that fits the needs of both your system and your clients.
In addition to these recommendations and best practices, there are additional considerations to be made while developing a RESTful API, such as API documentation, testing, and monitoring. You can be sure that your API is working correctly and meeting client requests while also guaranteeing that developers can use it efficiently when it is tested and monitored.
To sum up, designing a RESTful API is a crucial skill for developers working with web services. You can design an API that is straightforward, user-friendly, scalable, and safe by adhering to best practices and recommendations for resource names, HTTP methods, HTTP status codes, pagination, filtering, HATEOAS, and versioning.

Data Build Tool (DBT), An Emerging Data Transformation Tool


Author: Gulraiz Hayat,

DBT (Data Build Tool) is a cloud-based, open-source tool that is slowly taking over the data world. It helps data analysts and engineers transform and manage data in their data warehouses. It is often used in conjunction with data modeling and business intelligence tools like Looker, Tableau, and Power BI.
The main purpose of dbt is to build and manage data pipelines by transforming raw data into analytics-ready data in a data warehouse. dbt does this by allowing users to define their data models in SQL and then automatically transforming raw data into these models. DBT can also automatically generate documentation for these models, making it easier for users to understand the relationships between different datasets and the transformations that were performed on the data.
DBT is designed to work with a wide range of data warehouses including Snowflake, BigQuery, Redshift, and Postgres, and is highly extensible, with a large and growing community of developers contributing to its development.

Languages used in DBT:

DBT supports the use of the languages mentioned below:

1. SQL
2. Jinja
3. Python
4. YAML

DBT (Data Build Tool) primarily uses SQL for defining and building data models, transformations, and other data artifacts. It also supports Jinja, a templating language that allows for more dynamic and flexible SQL queries.

In addition to SQL and Jinja, DBT also supports Python for more advanced use cases, such as custom macros, tests, and models. Python can be used to extend DBT’s functionality and integrate with other tools or systems.

Finally, DBT also supports YAML, a markup language used for configuration files, such as defining data source connections and other project settings. YAML is used to configure and customize DBT projects, allowing users to define their data pipeline and transformations in a structured and repeatable way.

Distinguishing features of DBT:

DBT has numerous distinguishing features. Some typical ones are described below:


Modular Model Creation

You can create models in DBT without the hassle of first creating a table and then inserting values into it. You just write the ‘select’ statement and DBT does the rest. This quick process of creating tables is the modularity of DBT.
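As a minimal sketch (the model name and source table are hypothetical), a dbt model is just a SELECT statement saved as a .sql file:

```sql
-- models/customer_orders.sql (hypothetical model)
-- dbt materializes this SELECT as a table or view in the target schema
select
    customer_id,
    count(*) as order_count
from {{ source('raw', 'orders') }}
group by customer_id
```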

Creating and Managing Data Pipelines

After data models are defined using SQL, DBT generates optimized SQL code that can be executed against a data warehouse or other data storage system. This allows users to create and maintain a scalable data architecture.

Data quality and Integrity Assurance

DBT offers a variety of capabilities that make it simpler to guarantee data quality and integrity. This includes the ability to run data validation tests and to trace the lineage of data to understand how it has changed over time.

Standardization of Data Transformation Processes

DBT offers a uniform and standardized approach to data transformation and analysis, making it easier for data analysts and engineers to deal with data. This can facilitate the extraction of insights and the use of data to inform business choices, helping firms to improve the quality and dependability of their data.

Collaboration Made Easy:

dbt makes it possible to create a collaborative environment for data teams, enabling data analysts and engineers to work together on the same data models and transformations. This can improve communication and cooperation across data teams and facilitate the completion of challenging data projects.

Use Cases:

DBT is a versatile tool that can be used in a variety of use cases, including:

Data Warehousing: DBT enables you to build, manage, and maintain data pipelines that transform data in a cloud data warehouse. With dbt, you can extract data from multiple sources, transform it, and load it into your data warehouse.

Analytics: DBT enables you to build analytics-ready data pipelines that can be used to build dashboards, reports, and visualizations. With dbt, you can transform data in a way that makes it easy to analyze and visualize.

Machine Learning: DBT can be used to prepare and transform data for machine learning models. With dbt, you can join, filter, and transform data in a way that makes it suitable for machine learning.

DBT On Premises and DBT Cloud:

DBT can be used in the cloud through the IDE integrated into its website. You can use this link: https://cloud.getdbt.com/
You need to sign up, and then you can start using it. You have to tell it the target and source databases. dbt creates all the basic files needed for starting the project. You can click on the Develop tab to open the IDE.

The other way to use dbt is on premises. You can clone the GitHub repository containing all the dbt files to your local machine and then run it in any of your favorite IDEs after integrating it with DBT. DBT on premises is much faster at running queries, but it does not have the model preview feature that dbt Cloud offers. Another distinguishing feature of dbt Cloud is that it creates a lineage of all the sources used in the creation of a table.

How to Use DBT:

Inside the files that DBT creates automatically, you can find the models folder. Inside this folder you can create your SQL scripts. Name each file exactly after the table that needs to be created in your target database. Write the ‘select’ query in the file and save it; dbt will automatically create the table in the target database after you run that model using the ‘dbt run’ command.
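For example, assuming a hypothetical model file named customer_orders.sql, the commands below build just that model and run any tests defined for it:

```shell
dbt run --select customer_orders    # builds the model in the target database
dbt test --select customer_orders   # runs the tests defined for the model
```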

Pros and Cons of DBT:


Pros:

Ease of Use: DBT is easy to learn and use, especially if you are already familiar with SQL. It simplifies the development of data pipelines by enabling data teams to write modular, scalable, and well-documented SQL code.

Modular Design: DBT’s modular design makes it easy to build and maintain pipelines over time. You can break down complex pipelines into smaller, reusable components, which makes it easier to manage and maintain the codebase.

Version Control: DBT’s integration with Git enables teams to collaborate effectively and maintain high-quality code over time. With Git, you can track changes, review code, and roll back changes if necessary.

Flexibility: DBT is a versatile tool that can be used in a variety of use cases, including data warehousing, analytics, and machine learning. It allows you to transform data in a way that makes it easy to analyze and visualize.

Open-Source: DBT is an open-source tool that is free to use and can be customized to fit specific use cases. The community around dbt is active and provides support, making it easy to get help when you need it.


Cons:

SQL-Based: DBT’s SQL-based approach may not be suitable for users who prefer to use other programming languages. While dbt supports Python, users who prefer other languages may find dbt less appealing.

Limited Functionality: DBT’s primary focus is on data transformation, which means that it may not be suitable for users who require more advanced ETL functionality. Users who require more advanced functionality may need to use additional tools in conjunction with dbt.

Learning Curve: While DBT is relatively easy to use, there is still a learning curve involved. Users who are new to SQL or data pipelines may need some time to get up to speed with dbt.

Cloud Dependency: While DBT can be used locally, it is designed to work in the cloud. This means that users who prefer to work locally may find dbt less appealing.

Lack of Native Visualization: DBT does not provide native visualization capabilities, which means that users need to use additional tools to create dashboards and visualizations based on their data.


DBT is a powerful and versatile tool that simplifies and automates the development of data pipelines in a cloud data warehouse. It enables data teams to build modular, scalable, and well-documented pipelines that transform data into analytics-ready outputs. DBT’s ease of use, modular design, version control, flexibility, and open-source nature make it an attractive option for data teams looking to streamline their data pipelines. While dbt may not be suitable for all use cases, it is a tool that is worth considering for any organization looking to improve their data pipeline development process.

In short, DBT is super easy to use and makes the T part in ETL as quick as possible.

How to build a new ReactJS app and Why ReactJS is a popular choice for web development?


Author: M. Ahmed Fraz,

ReactJS is a JavaScript library that is used to build user interfaces. ReactJS is well-known for its speed, scalability, and simplicity, making it an ideal choice for developing large-scale web applications. It is one of the most popular front-end frameworks.

Why Use ReactJS?

ReactJS has several benefits that make it a popular choice for web development nowadays. Following are a few of the key advantages and features of using ReactJS:

Reusable Components:

Reusable components are an important element of ReactJS. ReactJS components are generally small, self-contained pieces of code that can be used anywhere across the entire application, making development faster and more efficient.

This means that each component should focus on a specific aspect of the UI and should be responsible for its own rendering and managing its state separately.

Developers can use ReactJS to build reusable, scalable components that can be shared across numerous pages or applications. This approach saves time and cuts down on the number of lines of code that need to be written. Reusable components are a fundamental concept in ReactJS, and they are essential for building scalable, maintainable, and efficient applications.
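As a minimal sketch (the Button component and its props are hypothetical), a reusable component is defined once and then used anywhere:

```jsx
// Button.js: a hypothetical reusable component
function Button({ label, onClick }) {
  // renders its own markup; behavior comes in via props
  return <button onClick={onClick}>{label}</button>;
}

export default Button;

// Used anywhere in the app:
// <Button label="Save" onClick={handleSave} />
```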

Virtual DOM:

ReactJS allows developers to make use of a virtual DOM, which is a lightweight representation of the actual DOM. This process speeds up the functionalities to update the user interface because just the elements of the DOM that have changed need to be updated. The Virtual DOM provides advantages such as faster rendering times, enhanced efficiency, and a better user experience.

The Virtual DOM reduces the amount of work that the browser has to do by minimizing the number of updates to the real DOM, which can improve the application’s overall speed and responsiveness. Besides that, the Virtual DOM can help prevent the occurrence of common issues such as layout thrashing, in which multiple changes to the real DOM cause unnecessary reflows and repaints.

One-way Data Binding:

ReactJS uses a one-way data binding approach, which makes it easier and more effective to manage the application state. One-way data binding means that data flows in a unidirectional model, from top to bottom, that is, from a parent component to its child components, and changes in a child component’s state are communicated to the parent component via callbacks.

Declarative Programming:

ReactJS uses declarative programming, which means that developers simply declare what they want the application to do, and ReactJS handles the details. This approach makes it easier to write clean, maintainable, and reusable code. ReactJS allows developers to describe how the application’s user interface should look based on the current application state.

Declarative Programming is more accessible and user-friendly than the typical imperative approach, which requires developers to declare how the front end should update based on changes in the application state.

Large Community:

ReactJS has a strong and active developer community that contributes to the framework and helps other developers. The ReactJS developer community is a large group of programmers who are passionate about producing applications with the ReactJS library. Facebook and a community of individual developers and corporations cooperate to maintain ReactJS, an open-source JavaScript library for designing user interfaces.

Because ReactJS has a large and active developer community, there are numerous resources available for learning and debugging. There are also numerous third-party libraries and tools that integrate with ReactJS, such as Redux for state management and Next.js for server-side rendering.

Getting Started with ReactJS:

To get started with ReactJS, all you need is a basic understanding of HTML, CSS, and JavaScript. If you are new to the field, that is okay; you don’t need to be an expert to get started. Once you have a basic grasp of these skills, you can follow these steps to get started with ReactJS and build your own website in no time:

Set Up Your Development Environment:

To develop ReactJS applications, you must have Node.js and npm (Node Package Manager) installed on your computer. Node.js can be downloaded and installed from the official website (https://nodejs.org/en/); npm does not need to be installed separately, as it comes bundled with Node.js.

Create a New ReactJS Project:

Once Node.js and npm are installed on your system, you can use the create-react-app command to start a new ReactJS project, which is the most popular and easiest way to create one. This will automatically create a new project folder with all of the required files and dependencies.
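For example (my-app is a placeholder project name):

```shell
npx create-react-app my-app   # scaffolds the project and installs dependencies
cd my-app
npm start                     # starts the development server
```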

Create a Component:

The next step is to create a component file for the interface and its functionality. A component is a piece of reusable code that can be used anywhere in the application. To create a component, open a new JavaScript file and define the component. As simple as that.

Render the Component:

You can render the component in the browser after you have created it by adding and importing it into the main App.js file.

Add Interactivity:

State and props can be used to add interactivity to your ReactJS application. State is used to manage data within a component, whereas props are used to pass data from one component to another.
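As a minimal sketch (the Counter component is hypothetical), state via the useState hook is enough to make a component interactive:

```jsx
// Counter.js: a hypothetical interactive component
import React, { useState } from 'react';

function Counter() {
  // count is local state; calling setCount re-renders the component
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;
```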


ReactJS is a powerful and popular front-end framework that makes it easy to build large-scale web applications. Its speed, scalability, and ease of use make it an ideal choice for developers. Getting started with ReactJS is easy, and with a little practice, you can start building your own applications in no time. If you’re new to ReactJS, start with the basics and gradually work your way up to more advanced topics like routing, hooks, and performance optimization.

Overall, ReactJS is a robust and effective library for creating user interfaces, with numerous advantages for both developers and end users. This is why it is more popular nowadays.

Information Technology Commerce Network – Asia Event 2023


Author: Wishaal Shahid,


ITCN Asia serves as a platform for next-generation technology and national digital transformation in the consumer technology, digital marketing, and enterprise hardware and software markets. The 21st ITCN Asia exhibition featured high-level conferences on the 3rd Digital Pakistan Summit, Information Security, Cloud Computing, Gaming & the Current State of the Industry in Pakistan, EdTech, and the Huawei Pakistan Cloud E-commerce Summit, attended by 50-plus ICT experts and futurists, including the IT Minister, Secretaries and representatives from government departments, senior executives, and CIOs from leading firms. The conferences provided new research, networking, and solutions for the key verticals of Healthcare, Education, Finance, Agriculture, Governance, Cloud, Information Security & Fraud Prevention, and so on.

About ITCN-Asia:

The 22nd ITCN Asia – Information Technology & Telecom Show was held at the Pak China Friendship Centre in Islamabad from the 23rd to the 25th of February 2023. The event gathered Pakistan’s entire tech ecosystem, as well as IT professionals from both the public and private sectors, under one roof to witness the country’s largest tech festival.

ITCN-Asia was based on three days of activities, providing a comprehensive platform to showcase solutions for all major economic verticals, including but not limited to Government, Cloud, Data Centers, Cybersecurity, Education, Banking & Finance, Health & Pharma, Ecommerce, Artificial Intelligence, and Robotics, with a focus on networking, knowledge sharing, and lead generation, and a series of conferences to create a learning environment for like-minded people to share knowledge. The energy was perceptible as thousands of attendees and hundreds of exhibitors gathered to see the most recent technological advancements. The three-day event was jam-packed with events such as keynote addresses, seminars, and panel discussions, among others.
Exhibiting at ITCN Asia provides a valuable opportunity for any team to network with new clients, demonstrate their skills, and pick the brains of other business professionals. Here is a closer look at what it was like to exhibit at this event for the DevGate team.

Devgate Experience at ITCN-Asia:

Making a strong impression is crucial at such events. Since there were so many booths from competing businesses, we needed to stand out in the exhibition hall, whether by designing an inventive booth, holding engaging demos, or using persuasive marketing materials. So before the event, the DevGate team organized everything to ensure that we made the most of our time at ITCN Asia. This included designing and building an eye-catching booth that showcased our products and services, preparing marketing materials like brochures and booklets, and training staff members to engage with attendees and answer questions.

There were seminars, workshops, and other learning opportunities that covered a wide range of IT topics alongside exhibitors. These sessions were a valuable source of information for us, helping us stay up-to-date on the latest trends and technologies.

Highlights of The Event:

One of the major highlights of the event was meeting other industry professionals. We connected with IT teams from different cities to exchange ideas and practical knowledge, learned what other businesses in the market are working on, and identified gaps that we can address later on. We also had the opportunity to view their portfolios, ask for demos of their products, and learn about the most recent trends and advancements in the IT sector.



Despite the long hours and hard work, our team left ITCN Asia feeling energized and inspired. We made connections with future clients, learned insightful things, and got the chance to present our ideas to a large audience. It was a wonderful experience for us to exhibit at ITCN Asia, and we look forward to participating in more such events in the future.

Introduction to Docker Containers


Author: Qamar Khurshid,

Docker containers are a popular and efficient way to package and deploy applications, and the Docker command-line interface (CLI) provides a convenient way to manage and deploy containers. In this blog post, we’ll take a closer look at the Docker CLI and some of its basic commands, and explain how to use them to deploy Docker containers.

The Docker CLI is a tool that allows users to interact with Docker from the command line, and provides a wide range of commands for managing and deploying Docker containers. Some of the most commonly used Docker CLI commands include:

docker run: This command is used to run a Docker container. It takes a Docker image as input, and creates a new container based on that image.

docker ps: This command lists all running Docker containers on the host machine.

docker stop: This command stops a running Docker container. It takes the container’s name or ID as input.

docker rm: This command removes a stopped Docker container. It takes the container’s name or ID as input.

docker build: This command is used to build a Docker image from a Dockerfile. A Dockerfile is a text file that contains the instructions for building a Docker image.

To deploy a Docker container, you first need to create a Docker image. This can be done using the docker build command, which takes a Dockerfile as input and produces a Docker image as output. Once you have a Docker image, you can use the docker run command to create and start a new container based on that image. (If the container is later stopped, you can start it again with the docker start command.)

For example, let’s say you have a simple Node.js application that you want to deploy as a Docker container. First, you would create a Dockerfile that specifies the instructions for building a Docker image for the application. This might look something like this:
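A minimal sketch of such a Dockerfile, assuming the app’s entry point is index.js and it listens on port 3000:

```dockerfile
# Hypothetical Dockerfile for a simple Node.js app
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```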

Next, you can use the docker build command to build a Docker image from the Dockerfile:

docker build -t my-node-app .

This will create a Docker image named my-node-app based on the instructions in the Dockerfile. Once you have the Docker image, you can use the docker run command to create and start a Docker container based on the image:

docker run -d -p 3000:3000 --name my-node-app my-node-app

This will create a new Docker container named my-node-app, and start it in detached mode (-d). It will also map port 3000 on the host machine to port 3000 on the container (-p 3000:3000), which will allow you to access the application from the host machine.

To verify that the container is running, you can use the docker ps command, which will list all running Docker containers on the host machine:

docker ps

This will show the running Docker containers, along with their names, IDs, and status.

Lazy Loading with React


Author: Muhammad Fraz,

The world of front-end development is constantly evolving, and people are creating more and more complex and powerful applications every day. Naturally, this leads to massive code bundles that can drastically increase app load times and negatively impact the user experience. This is where lazy loading comes in.

What is Lazy Loading?

Lazy loading is a design pattern for optimizing web and mobile apps.

When we launch a React web application, it normally bundles the whole application at once, loading everything for us, including all of the application’s pages, images, and content, potentially resulting in a slow load time and poor overall performance, depending on the size of the content and the available internet bandwidth.

In earlier versions of React, lazy loading was implemented using third-party libraries. However, React introduced two native functions to implement lazy loading with the v16.6 update.

In this tutorial, we’ll show you how lazy loading works in React.js, demonstrate how to use code splitting and lazy loading with React.lazy and React.Suspense, and create a React demo app to see these concepts in action.

The Benefits of lazy loading

The essential benefits of lazy loading are performance related:

  • Fast initial loading: by reducing the page weight, lazy loading a web page allows a faster initial page load time.
  • Less bandwidth consumption: lazy-loaded images save data and bandwidth, which is especially valuable for users who don’t have fast internet connections.
  • Decreased work for the browser: when images are lazy-loaded, the browser does not need to process or decode them until they are requested by scrolling the page.
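For images specifically, modern browsers support this pattern natively through the `loading` attribute, with no JavaScript needed. A minimal HTML sketch (file names invented for illustration):

```html
<!-- Loaded eagerly: visible above the fold -->
<img src="hero.jpg" alt="Hero banner">

<!-- Deferred: the browser only fetches this once it nears the viewport -->
<img src="gallery-1.jpg" alt="Gallery photo" loading="lazy">
```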

React.lazy() is a function that enables you to render a dynamic import as a regular component. Dynamic imports are a method of code splitting. React.lazy() removes the need to use a third-party library such as react-loadable or react-waypoint.

// without React.lazy()
import NewComponent from './NewComponent';

const MyComponent = () => (
  <div>
    <NewComponent />
  </div>
);

// with React.lazy()
const NewComponent = React.lazy(() => import('./NewComponent'));

const MyComponent = () => (
  <div>
    <NewComponent />
  </div>
);

React.Suspense lets you specify a loading indicator when the components in the tree below it are not yet ready to render.
While the lazy components load, placeholder content is shown by passing a fallback prop to the Suspense component.

import React, { Suspense } from 'react';

const LazyComponent = React.lazy(() => import('./NewComponent'));
const LazyComponent1 = React.lazy(() => import('./NewComponent1'));

const MyComponent = () => (
  <Suspense fallback={<div>Loading...</div>}>
    <LazyComponent />
    <LazyComponent1 />
    {/* Here you can add more lazy components. */}
  </Suspense>
);

The Disadvantages of lazy loading

As already mentioned, lazy loading has many advantages. However, overuse can have a significant negative impact on your application. Therefore, it’s important to understand when you should, and should not, use lazy loading. The disadvantages are listed below:

  • Not suitable for small-scale applications.
  • Requires additional communication with the server to fetch resources.
  • Can affect SEO and ranking.

AWS QuickSight vs Microsoft Power BI


Author: Muhammad Zaki Khurshid,

Business Intelligence (BI) tools such as AWS QuickSight, Tableau, Power BI, and IBM Cognos (among many others) are designed to assist companies in generating business insights with the help of visuals. Since the BI market is highly competitive, the companies behind these solutions have added distinct features to target particular customer bases that might use those features in their business requirements.
In this article, we shall make a brief comparison between two Business Intelligence solutions: AWS QuickSight and Microsoft Power BI. We will first talk about the two technologies separately, highlighting the key features that each tool provides, as well as the pros and cons of using each.

AWS QuickSight

AWS QuickSight is a cloud-based BI solution (running on the Amazon Web Services platform) that you can use to build visuals, perform ad-hoc analysis, generate business insights, and share the results with others. AWS QuickSight connects to a variety of data sources, including AWS data (S3, Athena, Redshift, etc.), third-party data, spreadsheet data, and more. QuickSight processes the data through SPICE, which stands for Super-fast, Parallel, In-memory Calculation Engine. Amazon claims that it is a robust in-memory engine that performs advanced calculations and serves data. If you want to create a dataset in QuickSight, you can either import it into SPICE or perform a direct query (directly querying the data instead of importing it into the tool). It is recommended to use SPICE to load the data so that QuickSight can access it quickly and efficiently. Direct query, on the other hand, accesses the source data directly, but this method is considered inefficient in QuickSight because the data is queried every time a change is made in the analysis.


Here is a high-level architecture of QuickSight. In summary, the data source connects to SPICE, which loads and processes the data (data cleaning, transformations, etc.); this data is then fed into QuickSight for data visualization.

Key Features

The visual presentation of QuickSight is one of its key selling points. Although the number of visual types is limited, QuickSight focuses on how each visual appeals to the end user. Following are some of the major components/key features of AWS QuickSight:

  • Visuals – These are the components you use to represent your data in the form of visuals. You have bar charts, box plots, combo charts, heat maps, KPIs, line charts, and many more visual types that you can use to create meaningful reports.
  • Insights – As the name suggests, this feature allows you to generate insights with the help of built-in machine learning algorithms. This feature is quite useful because it allows you to interpret your data in a way that might add value to your analysis.
  • Sheets – These are like separate pages that you see in Power BI, where you can keep a group of visuals on a single page. You can have one sheet showing visuals that represent the sales of a company, and another page showing visuals related to inventory analysis.
  • Simplicity – Although this is not a proper ‘feature’, it is certainly important to how this BI tool appeals to the market. Even people without much technical knowledge can easily explore data and extract valuable insights because of the simple and intuitive nature of the tool. Most of the time you are performing simple operations (arithmetic, string, date, etc.) on the data, changing data types, and dragging and dropping fields onto the visuals.
  • Speed – Speed is a major selling point for QuickSight, due to its SPICE Engine.

Using all of the features provided by QuickSight, users can create meaningful, beautiful, and interactive reports to assist stakeholders in various business areas, such as:

  • Marketing.
  • Finance.
  • Sales.

Pros and Cons

Just like any other tool, QuickSight has its pros and cons. Here are some of the pros of using this tool.

  • Easy to Use – As mentioned earlier, QuickSight is very simple and intuitive to use. The users can configure and start using the tool in no time. It also takes less time to learn the tool, so if this is your first BI tool, working on the data and creating visuals would seem very easy.
  • Everything is on the Cloud – As QuickSight runs on the AWS platform, you don’t really need to set up anything on your system. You just need a working AWS account, a subscription, and a network connection so that you can easily access the tool via the web. Even on a low-end system, QuickSight runs flawlessly since everything is hosted on the AWS cloud platform. You can also access the tool on Android or iOS devices; the mobile integration is excellent and allows users to view content in a seamless manner.
  • Quality of the Visuals – QuickSight has some stunning visual types in its collection. Although they are limited in quantity, they can certainly make a huge difference in visual presentation.
  • Pricing – QuickSight’s pricing is pretty optimal for an average user, compared to other platforms. For more information on pricing, please visit the link here.
  • Speed – This is one of the key features of using QuickSight. Because of the SPICE engine, data loading and processing becomes a great experience for all levels of users.

Now that we have highlighted the pros, here are some of the cons of using QuickSight.

  • Limited Visual Types – As mentioned earlier, QuickSight has quality visuals, but they are limited in quantity. So, if you need a visual that is not present in the collection, you might need to look for an alternative within the available visuals set.
  • Simplicity – The ease of use and simplicity was highlighted as an advantage, but it’s also one of its major disadvantages. Now this totally depends upon the use-case. If your reports require simple data connectivity, simple calculations, and visuals that only need fields to be dragged and dropped on to them, then QuickSight is a great choice. But for cases where we have to perform high level transformations, calculations, and complex reporting, this tool is not an optimal one. For complex reporting, there are tools such as Power BI, which we shall talk about in the next section.
  • Still New to the Scene – QuickSight is still pretty new in the BI market. So, this solution has to play catch-up with competitors such as Tableau and Power BI in terms of adding new features that support complex data processing, reporting, and sharing, so that it appeals to the mass market and, in particular, big corporations.

Now that we have talked briefly about QuickSight, let’s take a look at its competitor: Microsoft Power BI.

Microsoft Power BI

As the name suggests, Power BI is a Business Intelligence software product created by Microsoft. It combines business analytics, data visualizations, and best practices that help an organization make decisions. Power BI is also one of the leading BI solutions in the market, and many have ranked it as the best BI solution out there. Although the ranking is quite subjective, still, it is fair to say that Power BI is considered as a mainstream solution in the BI domain.


Let’s talk about the high-level architecture of Power BI. To demonstrate this, here is a diagram showing the various components of Power BI. If you’re already familiar with Power BI, you will notice that I have excluded Power BI Report Server from the diagram. While it is also a part of this ecosystem, the only major difference between Power BI Report Server and Power BI Service is that the former is an on-premises report-sharing platform, whereas the latter is cloud-based. With that said, here is the diagram.

Let’s break down this architecture diagram. Usually, these are the components of a report in Power BI:

  • Data Source – Power BI connects to a variety of data sources and uses the data from them to create reports. It can connect to databases (SQL Server, Redshift, Oracle, etc.), spreadsheets, JSON, XML, SharePoint folders, and many more. If you want to read more about the compatible data sources, click the link here.
  • Power BI Desktop – This is the desktop application used by developers to ingest the data, process the data (data transformations and modelling), create visuals, and then publish the report to the cloud (Power BI Service) or an on-premises server (Power BI Report Server). Power BI Desktop is primarily used for developing the report, so you will find all the options here that can be used to create a report specific to your requirement. This application is only available on Windows, so if you are using macOS, you might need to install a VM and run the app there.
  • Power BI Service – This is the cloud platform that allows you to share reports with the stakeholders. The developers create reports using Power BI Desktop, and then publish them to Power BI Service so that end users can generate insights and make data-driven decisions. Power BI Service has various features that allow you to create workspaces, configure dashboards, workspace apps, dataflows, and much more. Also keep in mind that Power BI Service is not for developing BI reports, so any major changes to an existing report are made in Power BI Desktop.
  • Browser & Mobile Apps – Once the report has been published to Power BI Service, users can easily view it using their web browser, or a dedicated app available on Android and iOS devices. To see the reports, users need access to their accounts.

Creating and Sharing a Report in Power BI

To create a new visualizations report, you would need to use the Power BI Desktop app, because as we mentioned earlier, this app is primarily used for report development purposes. If you don’t have the app installed, you can easily download it from Microsoft’s website, or you can download it from the Microsoft Store. Personally, I prefer the store option since it allows for automatic updates on the app. Once you open the app, you are welcomed with a beautiful UI of Power BI Desktop, which looks like this.

As you can see, we have the blank canvas at the center where you place all of your visuals. On top, we have various options to connect to data sources, open the Power Query Editor (which we shall talk about later), add Measures/Calculated Columns, go to the view tab, and so on. On the right, you have standard visual types which you can drag onto the canvas to create a visualization. Apart from the standard visuals, you also have the option to download custom visuals from the built-in store. These custom visuals are made by developers from around the world and are either paid or free. On the left, you have three different views: report view, table view, and modeling view. The report view is primarily used for creating visualizations; the table view for looking at the loaded data, adding calculated columns or tables, and changing data types; and the modeling view for creating relations between tables, hiding certain fields/tables, and so on.
Now let’s jump to the first key component of creating a report, which is connecting to a data source. In Power BI Desktop, you can connect to a variety of data sources and create report using them. For reference, here is a snapshot of some of the data sources that you can connect to.

As you can see, users can connect to Excel, XML, JSON, SQL Server, Oracle database, Azure data sources, and much more. You can also search for the data source you are looking for, since this window scrolls down to a lot of options. This goes to show that Power BI Desktop is compatible with the majority of data sources out there.
Once the data source is connected, you can either start developing the report or transform the data using a built-in tool called Power Query Editor. This tool is built into the Power BI Desktop app and is one of its most important parts, since it allows you to clean and transform the data. Power Query Editor performs all the data processing using a language called M (often referred to as M-Query). Here is a brief overview of the UI of Power Query Editor. We won’t go into the details of the tool, but it offers a lot of features that can be useful for your requirements.

Power Query Editor performs standard transformations, like changing data types, performing arithmetic/string/date operations, joins, group by, handling missing values, and much more. You can also implement Machine Learning and AI techniques on your data to generate various insights. On top of all of this, you can even write Python or R scripts on your data set to handle various issues that might not easily be done with the standard options available on Power Query Editor. All of these options can be used with the help of few clicks (Of course, Python and R would require script writing), and Power Query Editor automatically translates those transformations into equivalent M-Query Code. It also allows you to edit the M-Query, but usually that is done by more advanced users who are comfortable with the tool.
Once you finish the data processing in Power Query Editor, you can load your data back into Power BI Desktop, where you model the data and, ultimately, create the report you share with the stakeholders. Modeling the data means creating relations between tables using common fields, and creating Measures (functions that return scalar values), Calculated Columns, and Calculated Tables using a language called DAX (Data Analysis Expressions). DAX is used to perform various calculations within the report and can become quite complex depending on the requirement you are trying to fulfill. Unlike the M-Query we talked about earlier, DAX takes a lot of time and patience to become good at; it is considered one of the harder languages to master because of the breadth of functionality it provides, and in almost every scenario you learn something new. But you should not worry too much about ‘mastering’ DAX: you naturally become good at it over time and can work through problems relatively easily (googling problems helps quite a lot, though).
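As a small illustration of the kind of calculation DAX expresses, a measure and a time-intelligence variant of it might look like the following (the Sales table, its Amount column, and the Date table are invented for the example):

```
Sales Total = SUM ( Sales[Amount] )

Sales Last Year =
CALCULATE ( [Sales Total], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
```

The second measure reuses the first inside CALCULATE, which is the typical DAX pattern: simple aggregations are composed into more complex, filter-aware calculations.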
After modeling the data, you can start creating visuals inside the report canvas. You can use features such as bookmarks, tooltips, drill-down, drill-through in your report to make the experience more interactive. Just like QuickSight, you can also add multiple pages/sheets where you can group together a bunch of visuals that represent a certain analysis. You can format the visuals based on specification, use built-in machine learning and AI techniques, as well as create visuals with the help of R and Python. Once the report is created and ready to be shared, you can publish it to Power BI Service where you can collaborate with your colleagues (such as Quality Assurance Engineers, other developers) to finalize the report, and share with the actual consumers – the stakeholders.

To summarize the experience of Power BI, it is definitely a bit more complex than AWS QuickSight and takes some time to get used to. Overall, it is a great solution if you want to build detailed reports.

Pros and Cons

Now that we have highlighted some key features of Power BI, let’s talk about the advantages and disadvantages of using the software. Let’s first look at some of the pros.

  • Availability / Affordability – Power BI in general is quite affordable to use. The desktop app is free for everyone, so if you want to learn about the tool and work on projects, you can easily do so without paying anything. However, if you want to use Power BI Service and all its features, you would need to purchase a pro license at minimum, starting at $13.70. For more details on pricing, please visit the mentioned link.
  • Abundant Data Sources – Power BI connects to a variety of data sources, and has great integration with Microsoft’s proprietary services such as Excel, SQL Server. If you ever come across a data source that you’ve not heard of, and want to see if it is compatible with Power BI, there is a good chance it will be available in the list.
  • Monthly Updates – The good thing about Power BI is that it is updated every single month. The developers over at Microsoft are constantly adding or improving features all the time, so over time Power BI has become a refined product.
  • Mainstream BI Tool – Since Power BI is one of the leading BI solutions out in the market, the support in the community is quite impressive. If you ever run into a problem, there is a good chance others have come across it too, and you can easily google the problem to find its solution. On top of this, there are tons of resources online from which you can learn about the tool. You can watch videos on YouTube, read articles, and study courses on various websites. Personally, I use SQLBI and DAX Guide quite a lot, and also YouTube channels such as Curbal to learn more about the features within Power BI. This tool has also become a major requirement in the market if you want to become a Data Analyst / BI Developer. So, if you know how to work with the tool, it will definitely help you stand out in the interview process.
  • Custom Visuals – One key advantage of using Power BI Desktop is its ability to use Custom visuals. If your requirement cannot be fulfilled with the default visuals, you can always visit the store and search for the visual type you’re looking for. Although I would mention here that it takes time to search for a particular visual, it’s still safe to say that the tool has a variety of visuals to choose from.

Here are some of the cons of using Power BI. Of course, there are minute details, but these are the major issues I can think of:

  • Takes Time to Learn – As Power BI comes with a lot of features, it definitely takes some time to become good at it. You are constantly learning new things as you encounter various situations. Power BI is not just a simple drag-and-drop type of BI tool; it comes with a complete suite of products and features. So, learning most of the things comes with experience and effort.
  • Performance – One thing I have noticed over time with Power BI is the performance. With big data, your reports can become quite slow. In order to reduce the issue, you have to optimize the reports by applying various techniques. Applying the optimization techniques requires knowledge and experience, so if you’re new to the BI domain, optimization can become a major hurdle and you can end up with a report that the end-user can’t even see.

Power BI vs QuickSight: The Comparison

Now that we have highlighted the key features, as well as pros and cons of using the two BI solutions, we are ready to make a brief comparison between the two. To make things simple, I have created this table which highlights the major differences between them.


The two BI solutions that we have discussed here have their unique features and target markets. Both have their ups and downs, and if I had to pick a tool, I would definitely choose Power BI for the reasons highlighted in this article. Power BI fits the needs of the majority of users, but you can always use QuickSight if your reporting is simple and you do not require all the features that Power BI provides. We do, however, hope that QuickSight makes up for lost time and catches up to its competitors by adding new features consistently over time, so that it challenges the top contenders and takes a fair share of the market.

Using Typescript with React Native


Author: Shaban Qamar,

We all love JavaScript, as it is the common language for building React Native apps. But some of us also love types. Luckily, options exist to add stronger types to JavaScript. Our favorite is TypeScript, but React Native supports Flow out of the box. Today, we’re going to look at how to use TypeScript in React Native apps.

Commands which are used

To create a React Native app with JavaScript, we use this command (AppName is a placeholder for your project name):
npx react-native init AppName

To create a React Native app with TypeScript, we use this command:
npx react-native init AppName --template react-native-template-typescript
However, there are some limitations to Babel’s TypeScript support.


Since you might be developing on one of several different platforms, targeting several different types of devices, basic setup can be involved. You should first ensure that you can run a plain React Native app without TypeScript. Once you have managed to deploy to a device or emulator, you’ll be ready to start a TypeScript React Native app.
You will also need Node.js, npm, and Yarn.


Once you’ve created the basic React Native project, you’ll be ready to start adding TypeScript. Let’s go ahead and do that.

Adding TypeScript

The next step is to add TypeScript to your project. The following commands will:

  • add TypeScript to your project
  • add React Native TypeScript Transformer to your project
  • initialize an empty TypeScript config file, which we’ll configure next
  • add an empty React Native TypeScript Transformer config file, which we’ll configure next
  • add typings for React and React Native
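The command listing itself appears to have been lost from this page. Assuming the standard React Native + TypeScript Transformer setup that the list above describes, the commands would look roughly like this:

```shell
yarn add --dev typescript
yarn add --dev react-native-typescript-transformer
yarn tsc --init --pretty --jsx react-native
touch rn-cli.config.js
yarn add --dev @types/react @types/react-native
```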

The tsconfig.json file contains all the settings for the TypeScript compiler. The defaults created by the command above are mostly fine, but open the file and uncomment the following line:
/* Search the config file for the following line and uncomment it. */
// "allowSyntheticDefaultImports": true, /* Allow default imports from modules with no default export. This does not affect code emit, just typechecking. */

The rn-cli.config.js contains the settings for the React Native TypeScript Transformer. Open it and add the following:
module.exports = {
  getTransformModulePath() {
    return require.resolve('react-native-typescript-transformer');
  },
  getSourceExts() {
    return ['ts', 'tsx'];
  },
};

Rename the generated App.js and __tests__/App.js files to App.tsx. index.js needs to keep the .js extension. All new files should use the .tsx extension (or .ts if the file doesn’t contain any JSX).

If you tried to run the app now, you’d get an error like object prototype may only be an object or null. This is caused by a failure to import the default export from React as well as a named export on the same line. Open App.tsx and modify the import at the top of the file:

Change this:
import React, { Component } from 'react';
to this:
import React from 'react';
import { Component } from 'react';

Adding TypeScript Testing Infrastructure

React Native ships with Jest, so for testing a React Native app with TypeScript, we’ll want to add ts-jest to our devDependencies.
Then, we’ll open up our package.json and replace the jest field with the following:
"jest": {
  "preset": "react-native",
  "moduleFileExtensions": ["ts", "tsx", "js"],
  "transform": {
    "^.+\\.(js)$": "<rootDir>/node_modules/babel-jest",
    "\\.(ts|tsx)$": "<rootDir>/node_modules/ts-jest/preprocessor.js"
  },
  "testRegex": "(/__tests__/.*|\\.(test|spec))\\.(ts|tsx|js)$",
  "testPathIgnorePatterns": ["\\.snap$", "<rootDir>/node_modules/"],
  "cacheDirectory": ".jest/cache"
}
This will configure Jest to run .ts and .tsx files with ts-jest.

Installing Dependency Type Declarations

To get the best experience in TypeScript, we want the type-checker to understand the shape and API of our dependencies. Some libraries will publish their packages with .d.ts files (type declaration/type definition files), which can describe the shape of the underlying JavaScript. For other libraries, we’ll need to explicitly install the appropriate package in the @types/ npm scope.

For example, here we’ll need types for Jest, React, React Native, and React Test Renderer.

yarn add --dev @types/jest @types/react @types/react-native @types/react-test-renderer

We saved these declaration file packages to our dev dependencies because this is a React Native app that only uses these dependencies during development and not during runtime. If we were publishing a library to NPM, we might have to add some of these type dependencies as regular dependencies.

Ignoring More Files

For your source control, you’ll want to start ignoring the .jest folder. If you’re using git, we can just add entries to our .gitignore file.

# Jest
.jest/

As a checkpoint, consider committing your files into version control.
git init
git add .gitignore # important to do this first, to ignore our files
git add .
git commit -am “Initial commit.”

After completing all the steps above, you are good to go. You can create screens and components just like you do in JavaScript, but remember that you are no longer working in JavaScript; it’s TypeScript, so you have to work according to the environment you have just set up.
To run the project just type the following command:

For Android:

npx react-native run-android

For iOS:

npx react-native run-ios

GitHub Branching Strategy


Author: Muhammad Raza Saeed,

There is already a lot of contention and debate around using the Git Flow vs the GitHub Flow branching model, since there are trade-offs to using either. This is a concise summary.

  • For teams who must make formal releases on a longer time scale (a few weeks to a few months between releases) and be able to perform hotfixes, maintenance branches and other things that emerge from shipping so infrequently, git-flow makes sense.
  • It is advisable to choose something simpler, like GitHub Flow, for organizations who have established a culture of shipping, push to production frequently (if not daily), and are constantly testing and deploying.

However, comparing Git Flow with GitHub Flow is not the goal of this report. The purpose here is to promote the use of the most straightforward branching model that will work for all potential project teams. The “Branching Strategy” and the GitHub “Workflows” across projects need to be standardized immediately, utilizing the “Perspective” method. Utilizing the Git Flow model is recommended.

Key Benefits of Git-Flow Branching Model:

  • Parallel Development:
    GitFlow is useful because it isolates new development from completed work, which makes parallel development simple. Feature branches are used to work on new features and non-emergency bug fixes, and they are only merged back into the main body of code once the developer is satisfied that the work is ready for release.
    If you are interrupted, all you need to do to switch from one task to another is commit your changes and create a new feature branch for the new task. When the task is complete, check out your original feature branch and pick up where you left off.
  • Collaboration:
    Feature branches also make it simpler for two or more developers to work together on the same feature because each feature branch only contains the changes required to make the new feature functional, that makes it very simple to view and understand what each collaborator is doing.
  • Release Staging Area:
    As new development is completed, it gets merged back into the develop branch, which is the staging area for all completed features that haven’t yet been released. So, when the next release is branched off of develop, it will automatically contain all of the new work that has been finished.
  • Support For Emergency Fixes:
    GitFlow supports hotfix branches: branches made from a tagged release (or the master branch). You can use these to make an emergency change, safe in the knowledge that the hotfix will only contain your emergency fix. There’s no risk that you’ll accidentally merge new development at the same time.

Branches Explained:

Main Branches:

  • Master – this branch will contain stable code running in production. Projects should consider origin/master to be the main branch, where the source code of HEAD always reflects a production-ready state.
  • Develop – this is often referred to as the “integration branch”. It is also the starting point of the feature. When the source code in the origin/develop branch reaches a stable point and is ready to be released, projects should create a release branch.

Supporting Branches:

  • Feature – every time there’s a new feature to be implemented, a new branch needs to be created following this pattern feature/<Jira_storyID>-<summary>. Must merge back into the develop branch.
  • Release – ideally this branch should be used for UAT releases. The key moment to branch off a new release branch from develop is when the develop branch reflects the desired state of the new release. Should be merged to the master branch once a release (or UAT) is complete and all UAT fixes should be backward merged into develop.
  • Bugfix – a bugfix branch should be used for fixing UAT bugs. They are branches from release branches and once the UAT bug is fixed, change is merged back to the release branch.
  • Hotfix – a hotfix branch is a lot like release branches and feature branches except they are branched from master instead of develop. When a critical bug in a production version must be resolved immediately, a hotfix branch needs to be branched off from the corresponding tag on the master branch that marks the current production version.

Supporting Branches – Prefix Conventions:

  • Feature -> feature/**
  • Release -> release/**
  • Hotfix -> hotfix/**
  • Bugfix -> bugfix/**
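These conventions can be sketched with plain git commands. The following runs in a throwaway demo repository (the Jira story ID and summary are invented for illustration):

```shell
set -e
# Throwaway repo so the demo is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "init"

# develop is the integration branch; features branch off of it.
git checkout -q -b develop

# New feature branch following feature/<Jira_storyID>-<summary>:
git checkout -q -b feature/PROJ-123-login-screen
git branch --show-current   # prints feature/PROJ-123-login-screen
```

When the feature is done, a Pull Request merges it back into develop, per the conventions above.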

Conventions of Git-Flow Approach:

  • Using short lived branches.
  • When a feature is completed a Pull Request is created to merge with develop branch. This allows code review and integration tests to be verified before merging.
  • The new feature to be developed needs to follow similar syntax like feature/<Jira-storyID>-<summary>
  • The hotfix to be developed needs to follow the syntax like hotfix/<IssueID>-<summary>
  • The release to be created needs to follow the syntax like release/<version>
  • The develop branch is the main developer’s integration branch.
  • The master branch always reflects the current code from production.

SDLC Overview:

Development Phase:
New development (new feature, sprint bugs) is built into feature branches. Feature branches are branched off from develop branch, and finished features and fixes are merged back into the develop branch once ready.

UAT and GO Live Phase:
When it is time to make a release, a release branch is created from develop. The code in the release branch is deployed onto UAT. UAT bugs are recreated and fixed in bugfix branches. This deploy -> test -> fix -> redeploy -> retest cycle continues until the customer is happy that the release is good enough to push into production for end users.
When the release is finished, the release branch is merged into master, and back-merged into develop to make sure that any changes made in the release branch aren’t accidentally lost to new development.

Post GO Live Phase:
The master branch has the production code. Therefore, it is important to tag the master branch with the version of the production release. The only commits to master are merges from release branches and hotfix branches.
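The UAT/Go Live and Post Go Live phases can be sketched with plain git commands. The following demo runs in a throwaway repository (the version number is invented): it merges a release branch into master, tags the production version, and back-merges into develop.

```shell
set -e
# Throwaway repo so the demo is self-contained.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com
git commit -q --allow-empty -m "init"
git branch -M master                 # ensure the main branch is named master

git checkout -q -b develop           # integration branch
git checkout -q -b release/1.0.0     # release branch for UAT
git commit -q --allow-empty -m "UAT fix"

# Go Live: merge the release into master and tag the production version.
git checkout -q master
git merge -q --no-ff -m "Release 1.0.0" release/1.0.0
git tag -a v1.0.0 -m "Production release 1.0.0"

# Back-merge so UAT fixes are not lost to new development on develop.
git checkout -q develop
git merge -q --no-ff -m "Back-merge release 1.0.0" master

git tag -l   # prints v1.0.0
```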