Microsoft Implementing Analytics Solutions Using Microsoft Fabric Certification DP-600 Exam Questions (Q140-Q145)

Question # 140
You have a Fabric warehouse that contains a table named Staging.Sales. Staging.Sales contains the following columns.
You need to write a T-SQL query that will return data for the year 2023 that displays ProductID and ProductName and has a summarized Amount that is higher than 10,000. Which query should you use?
A.
B.
C.
D.
Correct answer: B
Explanation:
The correct query is Option B. It uses the GROUP BY clause to organize the data by ProductID and ProductName, then filters the result with a HAVING clause so that only groups whose SUM(Amount) exceeds 10,000 are included. Additionally, the DATEPART(YEAR, SaleDate) = 2023 condition ensures that only records from the year 2023 are returned.
References: the official Microsoft documentation on T-SQL queries and the GROUP BY clause (T-SQL GROUP BY).
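The full text of option B is not reproduced in this dump, but a query matching the description would look like the following sketch (column names are taken from the question; placing the year filter in WHERE rather than HAVING is the cleaner form, since it is a row-level rather than group-level condition):

```sql
SELECT ProductID,
       ProductName,
       SUM(Amount) AS TotalAmount
FROM Staging.Sales
WHERE DATEPART(YEAR, SaleDate) = 2023
GROUP BY ProductID, ProductName
HAVING SUM(Amount) > 10000;
```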
Question # 141
You have a Fabric tenant that contains a warehouse.
You are designing a star schema model that will contain a customer dimension. The customer dimension table will be a Type 2 slowly changing dimension (SCD).
You need to recommend which columns to add to the table. The columns must NOT already exist in the source.
Which three types of columns should you recommend? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.
A. a surrogate key
B. an effective end date and time
C. a natural key
D. a foreign key
E. an effective start date and time
Correct answer: A, B, E
Explanation:
To create an SCD Type 2 dimension, you need to add a surrogate key plus effective start/end dates alongside the other technical attributes. https://learn.microsoft.com/en-u ... g-dimensions-azure- synapse-analytics-pipelines/3-choose-between-dimension-types
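A Type 2 customer dimension of this shape could be sketched as follows (the table and column names are illustrative, not from the question):

```sql
-- Sketch of a Type 2 SCD customer dimension.
-- CustomerKey is the surrogate key (added); CustomerID is the
-- natural key that already exists in the source system.
CREATE TABLE dbo.DimCustomer
(
    CustomerKey        INT           NOT NULL,  -- surrogate key (answer A)
    CustomerID         INT           NOT NULL,  -- natural key from the source
    CustomerName       VARCHAR(100)  NOT NULL,
    EffectiveStartDate DATETIME2     NOT NULL,  -- answer E
    EffectiveEndDate   DATETIME2     NULL,      -- answer B; NULL = current row
    IsCurrent          BIT           NOT NULL
);
```

When a tracked attribute changes, the current row's EffectiveEndDate is set and a new row with a fresh surrogate key is inserted, so history is preserved.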
Question # 142
You have a Fabric warehouse that contains a table named Sales.Products. Sales.Products contains the following columns.
You need to write a T-SQL query that will return the following columns.
How should you complete the code? To answer, select the appropriate options in the answer area. Correct answer:
Explanation:
* For the HighestSellingPrice, a GREATEST function would find the highest value across the given price columns. Note that GREATEST was only added to T-SQL in SQL Server 2022 and is also available in Azure SQL and Microsoft Fabric; on older surfaces you would typically use a CASE or IIF expression instead. Since neither of those appears in the options, you should select MAX as a placeholder to indicate the function that stands in for the row-wise maximum logic.
* For the TradePrice, you should use the COALESCE function, which returns the first non-null value in a list. The COALESCE function is the correct choice as it will return AgentPrice if it's not null; if AgentPrice is null, it will check WholesalePrice, and if that is also null, it will return ListPrice.
The complete code with the selected SQL functions would look like this:

SELECT ProductID,
       MAX(ListPrice, WholesalePrice, AgentPrice) AS HighestSellingPrice, -- MAX is used as a placeholder
       COALESCE(AgentPrice, WholesalePrice, ListPrice) AS TradePrice
FROM Sales.Products

Select MAX for HighestSellingPrice and COALESCE for TradePrice in the answer area.
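Where the target supports it (SQL Server 2022 and later, Azure SQL, or a Fabric warehouse), the row-wise maximum can be expressed directly with GREATEST, so a working form of the intended query is (a sketch, assuming the three price columns from the question):

```sql
SELECT ProductID,
       GREATEST(ListPrice, WholesalePrice, AgentPrice) AS HighestSellingPrice,
       COALESCE(AgentPrice, WholesalePrice, ListPrice) AS TradePrice
FROM Sales.Products;
```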
Question # 143
You have a Microsoft Power BI project that contains a file named definition.pbir. definition.pbir contains the following JSON.
For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Correct answer:
Explanation:
We are analyzing the JSON for definition.pbir in a Power BI Project:
{
"version": "1.0",
"datasetReference": {
"byPath": {
"path": "../Sales.Dataset"
},
"byConnection": null
}
}
Statement 1: "definition.pbir is in the PBIR-Legacy format."
No. The legacy format references its dataset by byConnection (pointing to the Power BI service). Here the file uses byPath, which is the newer project format, not legacy.
Statement 2: "The semantic model referenced by definition.pbir is located in the Power BI service."
No. The JSON shows byPath: "../Sales.Dataset", meaning the semantic model is referenced locally within the project folder (Sales.Dataset), not in the Power BI service.
Statement 3: "When the related report is opened, Power BI Desktop will open the semantic model in full edit mode."
Yes. Because the semantic model is referenced by path (local PBIP project files), Power BI Desktop opens the model in full edit mode. If the reference had been byConnection, the report would instead open with a live connection to the model in the service.
Final answer:
PBIR-Legacy format: No
Semantic model in Power BI service: No
Opens in full edit mode: Yes
References:
Power BI Project (PBIP) structure
PBIR formats: byPath vs byConnection
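For contrast, a byConnection reference points the report at a semantic model published to the Power BI service. A minimal sketch (the workspace and model names in the connection string are illustrative, and additional fields that the format supports are omitted):

```json
{
  "version": "1.0",
  "datasetReference": {
    "byPath": null,
    "byConnection": {
      "connectionString": "Data Source=powerbi://api.powerbi.com/v1.0/myorg/SalesWorkspace;Initial Catalog=Sales.Dataset"
    }
  }
}
```

With this variant, opening the report establishes a live connection to the published model rather than loading a local model for full editing.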
Question # 144
You are creating a dataflow in Fabric to ingest data from an Azure SQL database by using a T-SQL statement.
You need to ensure that any foldable Power Query transformation steps are processed by the Microsoft SQL Server engine.
How should you complete the code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point. Correct answer:
Explanation:
You should complete the code as follows:
* Table
* NativeQuery
* EnableFolding
In Power Query, using Table before the SQL statement ensures that the result of the SQL query is treated as a table. NativeQuery allows a native database query to be passed through from Power Query to the source database. The EnableFolding option ensures that any subsequent transformations that can be folded will be sent back and executed at the source database (Microsoft SQL Server engine in this case).
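Assembled into a Power Query M query, the pattern looks like the following sketch (the server, database, and query text are illustrative, not from the question):

```powerquery
let
    // Connect to the Azure SQL database
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),
    // Pass a native T-SQL statement through to the source;
    // EnableFolding = true lets subsequent foldable steps be
    // pushed back to the SQL Server engine
    Result = Value.NativeQuery(
        Source,
        "SELECT ProductID, SUM(Amount) AS Total FROM Staging.Sales GROUP BY ProductID",
        null,
        [EnableFolding = true]
    )
in
    Result
```

Any later transformations added after Result (filters, column selections, and so on) that the connector can translate to SQL will then execute in the database rather than in the mashup engine.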