Summary: 18 years of project-based, multi-dimensional experience as a Data Architect/Engineer/Scientist, Network/Cloud Administrator, and REST/ETL/ELT developer in leading public- and private-sector organizations such as Lok Sanjh Foundation, SANFEC (South Asia Network on Food, Ecology and Culture), RDBC, University of Agriculture, Faisalabad, G.C. University, Faisalabad, Tianjin University, Lyftrondata Inc., and Xgrid.co.
Experience Areas:
Data Architect/Engineer/Scientist: Data management, data modeling, data governance, ensuring data quality, query processing, data partitioning, and data analysis and visualization using big data technologies; machine-learning and deep-learning applications and predictions; query optimization with A.I. approaches; relational databases and deployment of relational algebra for query processing; flattening and extraction of JSON, XML, RDF, CSV, and text files; relational mapping of RDF data; SPARQL query translation to SQL; query processing of SPARQL over Spark Core, Spark SQL, YARN (MapReduce), and Hadoop HDFS.
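The JSON-flattening task mentioned above can be illustrated with a minimal Python sketch. This is a generic, hypothetical example (function and field names are illustrative, not from any specific project) of turning a nested document into flat column/value pairs suitable for loading into a relational table:

```python
# Hypothetical sketch: flatten nested JSON into a single-level dict whose
# dotted keys can serve as relational column names. Illustrative only.

def flatten_json(obj, parent_key="", sep="."):
    """Recursively flatten a nested dict/list into one flat dict."""
    items = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            new_key = f"{parent_key}{sep}{key}" if parent_key else key
            items.update(flatten_json(value, new_key, sep))
    elif isinstance(obj, list):
        # List elements become indexed columns, e.g. tags.0, tags.1
        for i, value in enumerate(obj):
            items.update(flatten_json(value, f"{parent_key}{sep}{i}", sep))
    else:
        items[parent_key] = obj
    return items

record = {"id": 7, "name": {"first": "Ada"}, "tags": ["etl", "elt"]}
flat = flatten_json(record)
# flat == {"id": 7, "name.first": "Ada", "tags.0": "etl", "tags.1": "elt"}
```

The same dotted-key convention extends naturally to XML or RDF once the input is parsed into nested dictionaries.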
Project Management: Project planning, design, implementation, testing, and production with agile and waterfall approaches; development of PC-IV (Final Outcome) and PC-1 (Feasibility Report); time framing; Microsoft Project, Confluence, Jira, Kanban, GitHub, Trello, Azure DevOps, MLOps, DataOps
Machine Learning and Deep Learning: ML modeling (CNN, RNN, LSTM, A+, DCSCN, PPCN, SRCNN, SelfExSR, MSCN, etc.) and NLP (Natural Language Processing); image identification and number-plate recognition; super-resolution and reconstruction of high-resolution images; face recognition; streaming frameworks and data extraction from streams; application of engineering approaches for data extraction and cleansing; text mining with deep-learning-based decision support systems and predictions.
ETL/ELT Developer: Use of big data technologies (Hadoop ecosystem, Apache Spark Core, Spark ML, Spark SQL, Apache Impala, Apache Drill, Apache Dream, Apache Hive, Apache Oozie, Apache Hue, Apache Kafka, Apache Parquet, Apache Kudu, Apache Flume, TensorFlow, etc.) for ETL/ELT development; full-stack development for ETL/ELT; AWS Glue, Athena, Kinesis; Debezium with Kafka; ODI (Oracle Data Integrator)
Virtualization and Clouds: Expert in VMware, Oracle VM VirtualBox, Docker, and Kubernetes; Dockerizing full-stack development; Cloudera, Spark, Kafka, and Hadoop clusters on Docker and Kubernetes; Linux systems, DNS, DHCP, SAMBA and file sharing, XRDP, web-based VM management, and access to VMs.
Networks and Data Centers: TCP/IP, IPX, AppleTalk, DECnet, RIP, IGRP, EIGRP, BGP, OSPF, IS-IS, IPsec, VPN, Multicast, PIM, IGMP, CGMP, PNNI, ATM, Frame Relay, Ethernet, DLSw, IEEE 802.11b (Wi-Fi), RSRB, STUN, LANE, HSRP, Token Ring, VLAN, NAT, Spanning Tree, ISL, CDP, HDLC, PPP, ISDN BRI/PRI, T1/E1, DS3, SONET, OC-3/OC-12/OC-48, V.35, RS232
Achieved expert level in planning and designing data centres, working with different standards such as one-unit data racks with precision cooling, data centres with raised floors and structured cabling, precision air conditioning, ecological data centres, air-circulation design, and power-management planning.
RDS/NoSQL Databases and Storage: Oracle 19c, Oracle Cloud Infrastructure (OCI), Snowflake, AWS Redshift, MySQL, SQL Server; NoSQL: HBase, GBase, MongoDB, Apache Cassandra, Vitess; Storage: Oracle Cloud Infrastructure (OCI) buckets, S3, GCP storage, HDFS, Azure Storage.
Programming Languages: Scala, Java, Python, C++, and CUDA programming
Cloud Serverless Environments: GCP, AWS, and OCI services
Xgrid.co, Islamabad
Role:
a. ETL/ELT architecture for Oracle Cloud Infrastructure (OCI) and GCP as the target
b. Configure the Kafka cluster with GoldenGate and ODI for CDC pipelines
c. Complex Mappings over ODI
d. Extraction of data from various RDS to Oracle Cloud Infrastructure (OCI)
e. Use of in-memory computing for ETL connectors and transformations with Spark
f. Provisioning of Jupyter and Spark notebooks for customization
g. Effective use of Dataproc, Dataflow, Data Fusion, and AutoML in GCP
h. Development of Cloudera clusters and big data programming in Scala and Python
DATA ARCHITECT: (January 2022 to June 2022)
Lyftrondata INC. 44330 Mercure Cir, Dulles, VA 20166, 855-593-8766, hello@lyftrondata.com
a. Define architecture for ELT/ETL application and streaming frameworks
b. Development of in-house ETL applications.
c. Dockerizing the applications (Data Sync and Data Mirroring)
PROJECTS:
A. NMC WAREHOUSE
Design of a data warehouse for psychometric data on Snowflake.
a. Developed external stages with Snowpipe ingestion from S3 and Google Cloud Platform
b. Transformed data is loaded into the internal stage
c. Application of a dimensional model for normalization
d. Application of deep-learning approaches to predict candidates' future performance
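The dimensional-modeling step in the list above can be sketched in minimal Python: flat source records are split into a dimension table with surrogate keys and a fact table that references it. All table and field names here are hypothetical, chosen only to illustrate the technique, not taken from the NMC warehouse schema:

```python
# Hypothetical sketch of dimensional modeling: split flat test records
# into a candidate dimension (with surrogate keys) and a fact table.
# Field names are illustrative only.

def build_star_schema(records):
    """Return (dim_rows, fact_rows) built from flat input records."""
    surrogate = {}    # natural key -> surrogate key
    dim_rows = []     # candidate dimension
    fact_rows = []    # facts referencing the dimension
    for rec in records:
        nk = rec["candidate_id"]
        if nk not in surrogate:
            surrogate[nk] = len(surrogate) + 1
            dim_rows.append({"sk": surrogate[nk], "candidate_id": nk,
                             "name": rec["name"]})
        fact_rows.append({"candidate_sk": surrogate[nk],
                          "test": rec["test"], "score": rec["score"]})
    return dim_rows, fact_rows

records = [
    {"candidate_id": "C1", "name": "Ada", "test": "verbal", "score": 81},
    {"candidate_id": "C1", "name": "Ada", "test": "numeric", "score": 74},
    {"candidate_id": "C2", "name": "Alan", "test": "verbal", "score": 90},
]
dims, facts = build_star_schema(records)
# dims has 2 rows (one per candidate); facts has 3 rows keyed by surrogate key
```

In a real warehouse this split would be expressed in SQL or an ELT tool; the sketch only shows the surrogate-key bookkeeping.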
B. IBMS (ITEM BANK MANAGEMENT SYSTEM)
a. Capturing data changes from the REST API of TAO (a web application) with Kafka
b. Materialization, Transformation, Loading
c. Application of dimensional model for normalization
d. Active reporting over Power BI
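The change-data-capture flow listed above (capture, then materialize/transform/load) can be illustrated with a small Python sketch that applies CDC events to an in-memory target table. The event shape and operation names are assumptions for illustration only, not the actual TAO or Kafka payload format:

```python
# Hypothetical sketch of a CDC apply step: replay insert/update/delete
# events (as a Kafka consumer might receive them) onto a target table
# held as a dict keyed by primary key. Event format is illustrative.

def apply_cdc(target, events):
    """Apply CDC events keyed by 'id' to the target table in order."""
    for ev in events:
        op, row = ev["op"], ev["row"]
        if op == "insert":
            target[row["id"]] = dict(row)
        elif op == "update":
            target[row["id"]].update(row)   # partial update of changed columns
        elif op == "delete":
            target.pop(row["id"], None)
    return target

events = [
    {"op": "insert", "row": {"id": 1, "item": "Q1", "status": "draft"}},
    {"op": "update", "row": {"id": 1, "status": "published"}},
    {"op": "insert", "row": {"id": 2, "item": "Q2", "status": "draft"}},
    {"op": "delete", "row": {"id": 2}},
]
table = apply_cdc({}, events)
# table now holds only item Q1, with status "published"
```

In production the same ordered-replay idea runs inside the Kafka consumer, with the materialized result landing in the warehouse for Power BI reporting.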
Worked as acting Director of IT, in charge of all IT and software services, and served as team leader of all software and network development projects.
Responsible for dynamic web application development and for leading the development team