Microsoft Business Intelligence (MSBI) is designed to empower business users through its BI tools.
Users can create their own reports and analyses and easily share and collaborate with their colleagues.
MSBI is designed with IT manageability and cost effectiveness in mind.
Flat Files :
* A flat file stores data in a plain text file.
* Each line of the text file holds one record, with fields separated by delimiters such as commas or tabs.
* Flat files are data files that contain records with no structured relationships between them.
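As a small illustration of the points above, a delimited flat file can be read with Python's standard csv module; the sample data and field names here are invented for the example.

```python
import csv
import io

# A flat file: one record per line, fields separated by a delimiter.
data = "id,name,city\n1,Anil,Hyderabad\n2,Ravi,Chennai\n"

# csv.DictReader treats the first line as the header row.
reader = csv.DictReader(io.StringIO(data), delimiter=",")
records = list(reader)

print(records[0]["name"])  # → Anil
```

There is no structure beyond the delimiter: every relationship between records has to be reconstructed by whatever program loads the file.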
Raw Files :
* A raw file is a collection of unprocessed data (binary format).
* This means the file has not been altered, compressed, or manipulated in any way by the computer.
* Raw files are often used as data files by software programs that load and process the data.
_similarity : indicates how closely the input row matches the lookup row (a score between 0 and 1).
_confidence : indicates how confident the system is that the identified match is the correct one.
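SSIS computes these scores internally with its own token-based fuzzy-matching algorithm; purely as an illustration of the idea, here is a sketch using Python's difflib as a stand-in similarity measure. The `similarity` function and the reference list are hypothetical, not part of SSIS.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough 0..1 similarity score, analogous in spirit to _similarity."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

reference = ["Microsoft", "Oracle", "IBM"]  # lookup table values
dirty = "Microsfot"                         # misspelled input row

# Score the dirty value against every reference row, like a fuzzy lookup.
scores = {ref: similarity(dirty, ref) for ref in reference}
best = max(scores, key=scores.get)
print(best)  # the closest reference value
```

The confidence idea is separate: even a high similarity score is worth little if several reference rows score almost equally well, which is what `_confidence` captures.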
In SSIS there is an Execute T-SQL Statement task, which is similar to the Execute SQL task. Let us see the difference between the two.
Execute T-SQL Statement Task:
Pros: Takes less memory; faster performance.
Cons: Output into a variable is not supported; only supports ADO.NET connections.
Execute SQL Task:
Pros: Supports output into variables and multiple connection types; parameterized queries are possible.
Cons: Takes more memory; slower performance compared to the T-SQL task.
A task will only execute if the condition set by the precedence constraint preceding it is met. By using these constraints, a package can choose different execution paths depending on the success or failure of other tasks.
Success – Workflow will proceed when the preceding container executes successfully. Indicated in control flow by a solid green line.
Failure – Workflow will proceed when the preceding container’s execution results in a failure. Indicated in control flow by a solid red line.
Completion – Workflow will proceed when the preceding container’s execution completes, regardless of success or failure. Indicated in control flow by a solid blue line.
Expression/Constraint with logical AND – workflow will proceed when specified expression and constraints evaluate to true. Indicated in control flow by a solid color line along with a small ‘fx’ icon next to it. Color of line depends on logical constraint chosen (e.g. success = green, completion = blue).
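As a rough illustration of the evaluation rules above (not SSIS internals), the gating logic can be sketched in plain Python; the function name `should_run` and the string constants are hypothetical.

```python
# Hypothetical sketch of how a precedence constraint gates task execution.
SUCCESS, FAILURE, COMPLETION = "success", "failure", "completion"

def should_run(constraint: str, prior_result: str) -> bool:
    """Return True when the downstream task may execute."""
    if constraint == COMPLETION:        # blue line: run either way
        return True
    return constraint == prior_result   # green/red line: exact match

# A task that failed satisfies Failure and Completion paths, not Success.
print(should_run(SUCCESS, FAILURE),
      should_run(FAILURE, FAILURE),
      should_run(COMPLETION, FAILURE))  # → False True True
```

The Expression/Constraint variants add a boolean expression on top of this check, combined with AND or OR logic.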
Control flow is for designing the overall flow of the package, while data flow is for the ETL process; a data flow is a subset of the control flow.
There is only one control flow in a package, while multiple data flows can exist; a data flow cannot work without a control flow.
All process-oriented tasks are part of the control flow, while ETL-related tasks are part of the data flow, which is again a subset of the control flow.
Checkpoint is a property in SSIS that enables a package to restart from the point of failure. When the property is set to true, the package creates a checkpoint file that stores information about the package execution and is used to restart the package from the point of failure. If the package runs successfully, the checkpoint file is deleted and then re-created the next time the package runs.
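The restart behaviour described above can be sketched in plain Python; `run_package` and the JSON checkpoint format are hypothetical stand-ins, not how SSIS actually stores its checkpoint file.

```python
import json
import os

def run_package(steps, checkpoint_path):
    """steps: list of (name, callable). Hypothetical checkpoint/restart logic."""
    done = []
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)               # resume from a prior failed run
    for name, task in steps:
        if name in done:
            continue                          # already finished on a prior run
        task()                                # an exception leaves the file behind
        done.append(name)
        with open(checkpoint_path, "w") as f:
            json.dump(done, f)                # record progress after each step
    os.remove(checkpoint_path)                # package succeeded: delete checkpoint
```

If a step raises, the checkpoint file survives with the completed steps recorded, so the next run skips them and resumes at the failing step, mirroring the SSIS behaviour.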
The Bulk Insert task can transfer data only from a text file into a SQL Server table or view.
If the destination table or view already contains data, the new data is appended to the existing data when the Bulk Insert task runs.
We cannot perform any ETL operations during the data transfer.
The destination table must already exist in SQL Server.
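As an illustration of the append-only, no-ETL behaviour above, here is a small Python sketch using an in-memory SQLite database as a stand-in for the SQL Server destination; the table and sample rows are invented.

```python
import csv
import io
import sqlite3

# Stand-in for the Bulk Insert task: the destination table must already
# exist, and new rows are appended to whatever data it already holds.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER, name TEXT)")
conn.execute("INSERT INTO product VALUES (1, 'Keyboard')")   # existing data

flat_file = "2,Mouse\n3,Monitor\n"                           # text-file source
rows = list(csv.reader(io.StringIO(flat_file)))
conn.executemany("INSERT INTO product VALUES (?, ?)", rows)  # append, no ETL

count = conn.execute("SELECT COUNT(*) FROM product").fetchone()[0]
print(count)  # → 3
```

Note that the rows pass through untransformed: any cleansing or conversion would have to happen before or after the load, exactly the limitation described above.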
* The Import Column transformation is used to load binary data (photos, documents, media etc.) from the file system into a Data Flow.
* From the Data Flow it can then be loaded into a table, moved to a different location, modified – whatever you need to do.
* It is the reverse of the Export Column transformation, which pushes binary data out of a Data Flow and into the file system.
* The Export Column transformation reads data in a data flow and inserts the data into a file.
* For example, if the data flow contains product information, such as a picture of each product, you could use the Export Column transformation to save the images to files.
* Specify the data columns and the columns that contain the path of files to which to write the data.
* Specify whether the data-insertion operation appends or truncates existing files.
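The Export Column behaviour described above can be sketched in Python; `export_column` and the sample rows are hypothetical, and the `append` flag mirrors the append-or-truncate option on the transformation.

```python
import os
import tempfile

# Hypothetical sketch of the Export Column idea: each row carries binary
# data plus the path of the file that data should be written to.
def export_column(rows, append=False):
    mode = "ab" if append else "wb"   # append to vs truncate existing files
    for data, path in rows:
        with open(path, mode) as f:
            f.write(data)

out_dir = tempfile.mkdtemp()
rows = [
    (b"\x89PNG...", os.path.join(out_dir, "product1.png")),
    (b"\x89PNG...", os.path.join(out_dir, "product2.png")),
]
export_column(rows)                   # one file written per data-flow row
```

The Import Column transformation is simply the reverse direction: read each file named in a path column and attach its bytes to the row.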
OLTP :
1. Database systems designed for transaction purposes.
2. Data is frequently modified (volatile data).
3. Large number of users.
4. Normalized tables are maintained.
5. A large number of indexes is not recommended.
6. Database systems maintain current data (less than one year).
7. Small amount of data.
DWH :
1. DWH systems designed for analysis and reporting purposes.
2. Data modifications are infrequent (non-volatile data).
3. Small number of users (data analyst team).
4. Denormalized tables are maintained.
5. A large number of indexes is recommended.
6. DWH maintains historical data (1 to 10 years of data).
7. Huge data (very large database).
The answer to this question varies from project to project and mainly depends on how big your database is and how complex the database design is. Generally, for a database with a transaction table of 50 crore records, the cube size will be around 100 GB, so 100 GB is a safe answer to this question.
The database size will be 600 to 800 GB, for which the cube comes to about 100 GB. So go with 800 GB for this question.
You have to be very careful here.
The frequency of processing the cube depends on how frequently you receive new data.
We get new data either daily or weekly, and we process the cube every day at 10 PM IST.
There are a couple of areas where we face difficulty:
* While working on RELATIONSHIPS between Measure Groups and Dimensions.
* Working on Complex calculations
* Performance tuning
* Working on KPI creations
* Working on partitions.
5 tables, with around 50 dimension attributes.
10 to 20 in total.
My database design doesn’t support a star schema.
We have 6 types of relationships between measure groups and dimensions: no relationship, regular, fact, referenced, many-to-many, and data mining.
Only 2: one in the morning shift and another in the evening shift.
This is a very important question. The answer again depends on the size of the database, the complexity of the database, and your server settings. For a database with 50 crore transaction records, it generally takes around 3 hours.
If you answer more than 100, you will be caught out unless you are very good at MDX. The best answer for you is “I have worked on 50 calculations.”
Get more questions and answers from onlineitguru trainers after completing the MSBI online training.