November 19, 2021

Top 20 Oracle ADF Interview Questions and Answers

 

Ques: 1). What is Oracle ADF, and how does it work?

Answer: 

The Oracle Application Development Framework (Oracle ADF) is an end-to-end application framework that simplifies and accelerates the implementation of service-oriented applications by leveraging J2EE standards and open-source technologies. Oracle ADF can help you construct enterprise applications that use web, wireless, desktop, or web services interfaces to search, display, generate, change, and validate data. With drag-and-drop data binding, visual UI design, and team development tools built-in, Oracle JDeveloper 10g and Oracle ADF provide an environment that covers the whole development lifecycle from design to deployment.




Ques: 2). What is a Managed Bean, and how does it work?

Answer: 

Managed beans are JavaBean objects that are managed by a JSF implementation. The term "managed" refers to how the bean is created and initialised by the framework; it says nothing about the bean's functionality.

JSF employs a lazy initialisation model: a bean in a particular scope is created and initialised on demand, that is, when it is requested for the first time.
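
As a minimal sketch using the JSF 2 managed-bean annotations (the bean name and property are illustrative), a managed bean is a plain class that the framework instantiates on first use:

import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;

// Exposed to EL as #{userBean}; created lazily, on the first request that references it
@ManagedBean(name = "userBean")
@SessionScoped
public class UserBean {

    private String userName; // bound from a page, e.g. #{userBean.userName}

    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }
}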




Ques: 3). What exactly is task flow?

Answer: 

ADF task flows are a modular approach to defining application control flow. Rather than representing an application as a single huge JSF page flow, you can break it up into reusable task flows. Each task flow contains a portion of the application's navigational graph. The nodes in a task flow are activities; an activity node represents a simple logical operation, such as displaying a page, executing application logic, or calling another task flow. The transitions between activities are called control flow cases.




Ques: 4). What are the benefits of task flow over JSF flow?

Answer: 

ADF task flows offer a number of advantages over JSF page flows:

  • The application can be divided into a set of modular flows that call one another.
  • Views, method calls, and calls to other task flows can all be added to the task flow diagram.
  • Navigation happens between pages as well as other activities, including routers.
  • Task flows created with ADF can be reused within the same application or in a different one. Once you have broken an application up into task flows, you may decide to reuse them.
  • Data can be passed between activities within a task flow using a shared memory scope (for example, page flow scope).
  • Page flow scope defines a unique storage area for each instance of an ADF bounded task flow.




Ques: 5). What is ADF event handling and how does it work?

Answer: 

Event handling is normally done on the server in classic JSF applications. JSF event handling is based on the JavaBeans event paradigm, in which the JSF application uses event classes and event listener interfaces to handle events provided by components.

Clicking a button or link, selecting an item from a menu or list, and altering a value in an input field are all examples of user events in an application. When a user activity, such as pressing a button, occurs, the component creates an event object, which holds information about the event and identifies the component that caused it. An event queue is also created for the event. JSF instructs the component to broadcast the event to the relevant registered listener at the appropriate point in the JSF lifecycle, which then calls the listener method to process the event. The listener method can either update the user interface or call backend application logic.

ADF Faces command components, like regular JSF components, send out ActionEvent events when they're triggered, while ADF Faces input and select components send out ValueChangeEvent events when their local values change.
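
For illustration (the bean and method names are assumptions), a backing-bean listener for an ActionEvent looks like this:

import javax.faces.event.ActionEvent;

public class OrderBean {

    // Referenced from a page, e.g. actionListener="#{orderBean.submitOrder}"
    public void submitOrder(ActionEvent event) {
        // The event object identifies the component that queued it
        String sourceId = event.getComponent().getId();
        System.out.println("ActionEvent fired by component: " + sourceId);
        // ...update the UI or invoke back-end application logic here...
    }
}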




Ques: 6). What is Association Accessor?

Answer: 

It is a way for an entity instance at one end of an association to access the related entity object instance at the other end. A source accessor navigates from the destination entity to the source entity, while a destination accessor navigates from the source to the destination.

It is defined in the entity object definition XML files, which, together with the view object and view link definitions, can be used to specify cross-entity relationships. Its return type is the associated entity object definition's entity object class, or 'EntityImpl' if the associated entity object definition has no entity object class.




Ques: 7). What is the difference between Datacontrol.dcx and Databindings.cpx?

Answer: 

The DataBindings.cpx file contains the Oracle ADF binding context for your entire application, along with the metadata from which the Oracle ADF binding objects are created at runtime. The DataControls.dcx file is created when you register data controls on business services; it is not generated for Oracle ADF Business Components. It identifies the Oracle ADF model layer data control classes (factory classes) that facilitate the interaction between the client and the available business service.




Ques: 8). Why did you choose ADF?

Answer: 

I had already developed a strong interest in technology, with the goal of making a substantial impact. As a mainstream development framework, ADF can be used to create virtually any kind of enterprise application, and I became interested in the technology because it is so broadly useful. Of course, this is in keeping with my goals and ambitions.

Second, ADF is widely recognised as one of the most advanced J2EE development frameworks; ADF 11g is used to build the Oracle Fusion Middleware stack. This is why there are so many opportunities in this area.

Finally, ADF contains a number of built-in components that take over much of the work of writing code. This way, I can focus on the features of the application and how it can deliver more value to the business.




Ques: 9). What is the difference between a binding context and a binding container?

Answer: 

The binding context is a runtime map between the data controls and the page definitions of the pages in the application; it is used to access the binding layer. It is accessible through EL expressions on your JSPX pages. The binding container is a request-scoped map that is used to instantiate the page bindings. It is likewise accessible through EL expressions, and because it is a request-scoped map it is available throughout every page request.
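
For illustration, the binding layer is usually reached through EL such as #{bindings.SomeAttribute.inputValue}; programmatically, a sketch using the ADF model API (the operation name "Commit" is an assumption about the page definition) looks like this:

import oracle.adf.model.BindingContext;
import oracle.binding.BindingContainer;
import oracle.binding.OperationBinding;

public final class BindingUtil {

    // The binding context is the application-wide runtime map;
    // getCurrentBindingsEntry() returns the request-scoped binding container
    public static BindingContainer currentBindings() {
        return BindingContext.getCurrent().getCurrentBindingsEntry();
    }

    // Example: execute an operation binding declared in the page definition
    public static Object invokeCommit() {
        OperationBinding op = currentBindings().getOperationBinding("Commit");
        return op.execute();
    }
}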




Ques: 10). What is the difference between the Visible and Rendered properties?

Answer: 

The visible property is set to true or false depending on whether the field should be shown on the page at run time. Even when the field or component is hidden, it still exists in the page.

The rendered attribute is used to include the component in the page conditionally, based on a set of criteria; when it evaluates to false, no markup for the component is produced at all.




Ques: 11). In ADF, how do you define pagination?

Answer: 

In ADF, we build custom pagination by using the af:iterator tag to create a custom table. The iterator renders the data collection in the same way that a table does. Then, through the ADF bindings declaration, we bind the iterator's value property to the collection model and set the number of visible rows to, for example, 15.



 

Ques: 12).  What is the Adf Lifecycle and how does it work?

Answer: 

The ADF lifecycle is divided into nine phases:

1. Set the Context: The lifecycle, the binding container, and related values are set up.

2. Create the Model: The application model is created and supplied with the necessary parameters and instructions.

3. Apply Input Values: This phase handles the request values submitted to the application.

4. Validate Input Values: Verifies the values applied in phase 3.

5. Update Model: Once the input values have been validated, the model data is updated with them.

6. Validate Model Updates: Validates the updated model values.

7. Process Component Updates: Handles all actions arising from the input values.

8. Metadata Commit: Commits runtime metadata changes to the model.

9. Prepare Render: The final page is prepared and sent for rendering.

 



Ques: 13). What is the difference between Validators and Converters?

Answer: 

Validators and converters are used to add validation and conversion capabilities to ADF input components.

Converters convert the values submitted on ADF forms to the types that the application expects.

Validators are used to enforce validation rules on input components.
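
As a minimal sketch (the converter name and logic are illustrative), a custom converter implements the javax.faces.convert.Converter interface; validators follow the same pattern through javax.faces.validator.Validator:

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.convert.Converter;
import javax.faces.convert.FacesConverter;

// Registered under an id that pages can reference
@FacesConverter("upperCaseConverter")
public class UpperCaseConverter implements Converter {

    // Submitted string -> model value (runs after the form is submitted)
    @Override
    public Object getAsObject(FacesContext context, UIComponent component, String value) {
        return value == null ? null : value.trim().toUpperCase();
    }

    // Model value -> string rendered on the page
    @Override
    public String getAsString(FacesContext context, UIComponent component, Object value) {
        return value == null ? "" : value.toString();
    }
}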

 

Ques: 14). What is Partial Page Rendering, and how does it work?

Answer: 

PPR (Partial Page Rendering) is ADF's equivalent of Ajax: instead of reloading the whole page, only the components that need to change are refreshed in response to an event.

To enable partial page rendering, you work with the following attributes:

autoSubmit: When autoSubmit is set to true, the component automatically submits its value to the server as soon as it is updated or changed.

partialSubmit: When partialSubmit is set to true on a command component, activating it submits the page partially instead of performing a full page submit.

partialTriggers: A component's partialTriggers attribute lists the ids of the components whose events should cause this component to be re-rendered.

 

Ques: 15).   Is ADF a better framework than JSF?

Answer: 

The ADF framework is built on top of JSF, and when it comes to building business applications, ADF is very smooth.

Let's look at it first from the user's perspective. The ADF application offers:

  • A UI that is simple and straightforward to use.
  • Easy access for anyone, even on first use.
  • Almost everything in one location.
  • A smooth and solid look and feel.

From the developer's perspective:

  • It's a simple platform on which to build an application.
  • More than a hundred ready-to-use components.
  • Components that can be customised to alter their functionality.
  • Simple to maintain.
  • Simple to set up.
  • Simple to use.
  • Drag-and-drop functionality.

Since ADF is built on JSF, it will generally give you a nicer and smoother experience. JSF is nonetheless a very good platform in its own right, but in a direct comparison ADF is the better choice.

 

Ques: 16). What is the Role Of View Object, Entity Object, and Application Module?

Answer: 

View object: A view object represents a SQL query and presents its results to the application. It helps the system sort, filter, and organise the data so that the UI can quickly find what it is looking for.

Entity object: An entity object represents a row in a database table; the EO is to ADF what an entity EJB is to J2EE. An EO encapsulates the whole row, so identifying and validating any value in the row becomes much easier. Two EOs can also be related to one another and grouped according to their relationship; such a relationship between entity objects is called an entity association.

Application module: An application module is the component through which a client accesses its work on the platform. The AM contains the essential, top-level view object instances and view links, and it holds the current transaction, which is why it is often referred to as the transactional module.

 

Ques: 17). What does it mean to be a Phase Listener?

Answer: 

The Oracle ADF lifecycle is fully integrated with the JavaServer Faces request lifecycle, including everything needed to set up the binding context, prepare the binding container, validate and update the ADF model, persist MDS changes, and prepare the response.

Developers who need to listen to and interact with the request lifecycle can use an ADF Phase Listener. Unlike the standard JSF Phase Listener defined in the faces-config.xml file, the ADF Phase Listener lets you listen to both the standard JSF phases and the ADF phases. The ADF Phase Listener is written in Java and configured in an adf-settings.xml file that you must create.

The ADF PagePhaseListener registered this way applies to every page in the application. In the Oracle ADF framework, developers can therefore use either a conventional JSF listener or the ADF-specific listener, which adds ADF-specific page-cycle extensions; the ADF lifecycle can be customised with such listeners.
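
A sketch of such a listener, assuming the oracle.adf.controller.v2.lifecycle API (the class name and logging are illustrative; the listener would then be registered in adf-settings.xml):

import oracle.adf.controller.v2.lifecycle.Lifecycle;
import oracle.adf.controller.v2.lifecycle.PagePhaseEvent;
import oracle.adf.controller.v2.lifecycle.PagePhaseListener;

public class AuditPagePhaseListener implements PagePhaseListener {

    @Override
    public void beforePhase(PagePhaseEvent event) {
        // Runs before every ADF page lifecycle phase, for every page
        if (event.getPhaseId() == Lifecycle.PREPARE_MODEL_ID) {
            System.out.println("Before the Prepare Model phase");
        }
    }

    @Override
    public void afterPhase(PagePhaseEvent event) {
        // Runs after every ADF page lifecycle phase
    }
}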

 

Ques: 18). What is inter-portlet communication, and how does it work?

Answer: 

When an action in one portlet causes a reaction in another, this is known as inter-portlet communication. It serves as a link between two portlets. One portlet, for example, has a checkbox with a list of products. When I select a product from the list and press the submit button, the other portlet displays the product's data.

 


Ques: 20). What are the various bean scopes in JSF?

Answer: 

Three bean scopes are supported by JSF:

Request scope: The request scope lasts for a single request. It begins when an HTTP request is submitted and ends when the response is delivered to the client.

Session scope: The session scope persists from the time a session is established until it is terminated.

Application scope: The application scope lasts for the web application's entire lifecycle and is shared across all requests and sessions.
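
For illustration, assuming the JSF 2 scope annotations (the bean name is hypothetical), the scope is declared on the bean class, and swapping the annotation changes how long the instance lives:

import javax.faces.bean.ApplicationScoped;
import javax.faces.bean.ManagedBean;

@ManagedBean(name = "hitCounter")
@ApplicationScoped // one instance for the whole web application; all sessions share it
public class HitCounter {

    private int hits; // use @RequestScoped or @SessionScoped for per-request/per-session state

    public int getHits() {
        return hits;
    }

    public void increment() {
        hits++;
    }
}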

 

 


November 18, 2021

Top 20 MySQL Interview Questions & Answers

  

Ques: 1). I keep getting an error about a foreign key constraint failing when I run the DELETE statement. So, what do I do now?

Answer:

This means that some of the data you are attempting to delete is still referenced from another table. For example, if you have a universities table and a students table in which each student row stores the ID of the university attended, deleting a university will fail while the students table still contains people enrolled at that university. The proper procedure is to delete the referencing rows first and only then the university in question. A quick (and risky) workaround is to run SET FOREIGN_KEY_CHECKS=0 before the DELETE command and set the parameter back to 1 afterwards.
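
A minimal JDBC sketch of this delete-the-referencing-rows-first approach; the database, table, and column names here are illustrative assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class DeleteUniversity {

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/school", "user", "password")) {
            con.setAutoCommit(false); // perform both deletes atomically
            try (PreparedStatement delStudents = con.prepareStatement(
                     "DELETE FROM students WHERE university_id = ?");
                 PreparedStatement delUniversity = con.prepareStatement(
                     "DELETE FROM universities WHERE id = ?")) {
                delStudents.setInt(1, 42);    // remove the referencing rows first
                delStudents.executeUpdate();
                delUniversity.setInt(1, 42);  // then the parent row
                delUniversity.executeUpdate();
                con.commit();
            } catch (Exception e) {
                con.rollback();
                throw e;
            }
        }
    }
}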

 

Ques: 2). You created a search engine that should provide ten results at a time, but you also want to know how many rows there are in total. How are you going to show that to the user?

Answer:

SELECT SQL_CALC_FOUND_ROWS page_title FROM web_pages LIMIT 1,10;

SELECT FOUND_ROWS();

The second query tells you how many results there are in total (note that COUNT(*) is never used), so you can display something like "Found 13,450,600 results, displaying 1-10". Note that FOUND_ROWS() ignores the LIMIT you set and always returns the total number of rows matched by the query.

 

Ques: 3). Differentiate between MyISAM Static and MyISAM Dynamic.

Answer:

All fields in a MyISAM Static table have a fixed width. A MyISAM Dynamic table has variable-length field types such as TEXT, BLOB, and so on, to handle data of varying lengths. In the event of corruption, MyISAM Static is easier to restore because, even if you lose some data, you know exactly where to look for the beginning of the next record.

 

Ques: 4). What are the benefits of MyISAM versus InnoDB?

Answer:

MyISAM offers much more conservative disk space management: each MyISAM table is stored in its own file, which can then be compressed with myisamchk if necessary. InnoDB tables are stored in a tablespace, and there is not much room for further optimisation there. In InnoDB, all data except TEXT and BLOB columns is limited to a combined 8,000 bytes per row. InnoDB does not support full-text indexing, and because of tablespace complexity its COUNT(*) queries run slower than in MyISAM.

 

Ques: 5). What are MySQL's HEAP tables?

Answer:

HEAP tables are stored in memory. They are typically used for high-speed, short-term storage. TEXT and BLOB fields are not permitted in HEAP tables, only the = and <=> comparison operators can use their indexes, indexed columns must be declared NOT NULL, and AUTO_INCREMENT is not supported.

 

Ques: 6). Difference between primary key, unique key and candidate key.

Answer:

Primary key: It holds unique values and cannot accept NULL values. A table can have only one primary key.

Unique key: It holds unique values and can accept one NULL value. A table can have more than one unique key.

Candidate key: A candidate key fulfils all the requirements of a primary key: it is NOT NULL and its values uniquely identify records, so it is a candidate for being the primary key. Every table must have at least one candidate key, but it can have several.

 

Ques: 7). What is the Difference Between MySQL and MySQL AB?

Answer:

The term "MySQL" is used to refer to both the MySQL database management system and the corporation that created it. MySQL AB is the full name of the firm. MySQL is the database management system (DBMS) that MySQL AB owns, develops, and sells—that is, the "MySQL" database server software and related items such client applications for talking with the server and programming interfaces for creating new clients.

The "AB" in the company name is an abbreviation for the Swedish "aktiebolag", or "stock company"; it corresponds to "Inc." in the US. Subsidiaries of MySQL AB include, for example, MySQL Inc. and MySQL GmbH.

 

Ques: 8). What are the features of MYSQL?

Answer:

It's a pretty effective programme on its own. It can handle a significant portion of the features found in the most expensive and sophisticated database solutions.

  • It makes use of a standard version of the widely used SQL data language.
  • It's available under a free and open-source licence.
  • It is compatible with a wide range of operating systems and languages.
  • It operates swiftly and efficiently, even when dealing with massive data sets.
  • PHP has a variety of functions for working with MySQL databases.

 

Ques: 9). What are the steps you take in phpMyAdmin to modify a table?

Answer:

From the table list on the left side of the SQL screen, choose the table you want. In the main area of the screen, the table shows in a spreadsheet-like manner. In this window, we can change the table's contents.

By clicking the appropriate button displayed near the record, you can edit or remove it.

By clicking the corresponding link near the bottom of the table, you can add a row.

Press the Enter key to exit the cell you just edited. Any modifications you make to the data in the table are automatically transformed to SQL code.

 

Ques: 10). What are your thoughts on TIMESTAMP?

Answer:

The TIMESTAMP data type's DEFAULT CURRENT_TIMESTAMP and ON UPDATE CURRENT_TIMESTAMP attributes enable automatic updating. Each table can have only one automatically updated TIMESTAMP field; that is, only one TIMESTAMP field may be defined with DEFAULT CURRENT_TIMESTAMP or ON UPDATE CURRENT_TIMESTAMP. One or both attributes can be specified on that single field.
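
For illustration, a table using both attributes on a single TIMESTAMP column could be created over JDBC as follows (the table and column names are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateAuditedTable {

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
             Statement st = con.createStatement()) {
            // updated_at is set on INSERT and refreshed automatically on every UPDATE
            st.executeUpdate(
                "CREATE TABLE notes (" +
                " id INT AUTO_INCREMENT PRIMARY KEY," +
                " body TEXT," +
                " updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP" +
                " ON UPDATE CURRENT_TIMESTAMP" +
                ")");
        }
    }
}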

 

Ques: 11). Is the data type DATETIME a STRING?

Answer:

DATETIME data types are commonly formatted as strings, and they can also be inputted as strings. They do, however, have numerical representations, which are visible when the value is cast into a numeric data type.

 

Ques: 12). What is the maximum number of privilege levels that can be granted?

Answer:

Privileges can be issued at five different levels.

Global - Privileges that apply to all databases on a MySQL server are known as global privileges. These privileges are stored in the mysql.user table.

Database - Database privileges apply to all objects in a given database. These privileges are stored in the mysql.db and mysql.host tables.

Table - Table privileges apply to all columns in a table. These privileges are stored in the mysql.tables_priv table. Table-level privileges are granted and revoked with GRANT ALL ON db_name.table_name and REVOKE ALL ON db_name.table_name.

Column - Column-level privileges apply to one or more columns in a table. These privileges are stored in the mysql.columns_priv table. When using the REVOKE command to remove column-level privileges, you must specify the same columns that were granted, listing the column or columns within parentheses.

Routine - The CREATE ROUTINE, ALTER ROUTINE, EXECUTE, and GRANT privileges apply to stored routines (functions and procedures). They can be granted at the global and database levels and, with the exception of CREATE ROUTINE, at the routine level for individual routines. These privileges are stored in the mysql.procs_priv table.

 

Ques: 13). What exactly is SQLyog?

Answer:

SQLyog is perhaps the most widely used GUI tool for MySQL. The programme has been around since August 2002, and we can see how mature it is when we utilise it. This can be a very useful and practical tool for folks who come from a Windows desktop experience. There is a free community version that is somewhat limited, however it only operates on Windows desktops.

 

Ques: 14). Tell me how you back up a MYSQL database?

Answer:

With phpMyAdmin, backing up databases is simple. By clicking the database name in the left-hand navigation bar, we can choose the database we wish to back up. After that, we select the Export button and make sure that all of the tables we want to back up are highlighted.

Then, under Export, we can define the choices we desire. Make sure we can save the output by entering a filename.

 

Ques: 15). What are the differences between the heap table & temporary table?

Answer:

Heap table:

  • Stored in memory and used as fast temporary storage.
  • BLOB and TEXT fields aren't allowed.
  • Indexed columns must be NOT NULL.
  • Doesn't support AUTO_INCREMENT.
  • Can be shared among clients.
  • Only comparison operators can be used (=, <, >, >=, <=).

Temporary table:

  • Used to store provisional data.
  • The stored data is deleted when the client session ends.
  • Not shared among clients.
  • Created with special syntax: CREATE TEMPORARY TABLE.

 

Ques: 16). Why would you want to utilise the MySQL Database Server?

Answer:

The MySQL Database Server is a fast, dependable, and simple-to-use database server. The software is open source: anyone can download MySQL from the Internet and use it free of charge, and anyone may study and modify the source code.

MySQL Database Software is a client/server system that includes a multi-threaded SQL server that supports a variety of backends, a variety of client programmes and libraries, administrative tools, and a variety of application programming interfaces (APIs).

 

Ques: 17). Is the phpMyAdmin interface user-friendly?

Answer:

The UI of the phpMyAdmin application is fairly user-friendly. Many administrators are at least somewhat familiar with it because it is utilised by a number of web hosting companies. Although it has a limited set of capabilities, it does offer some flexibility in that we can use a web browser to view the application from anywhere.

 

Ques: 18). What are the Common MYSQL Function?

Answer:

CONCAT(A, B) - Concatenates two string values to create a single string output; often used to combine two or more fields into one.

FORMAT(X, D) - Formats the number X to D significant digits.

CURDATE(), CURTIME() - Return the current date or time.

NOW() - Returns the current date and time as one value.

MONTH(), DAY(), YEAR(), WEEK(), WEEKDAY() - Extract the given part from a date value.

HOUR(), MINUTE(), SECOND() - Extract the given part from a time value.

DATEDIFF(A, B) - Determines the difference between two dates; commonly used to calculate ages.

SUBTIME(A, B) - Determines the difference between two times.

FROM_DAYS(N) - Converts an integer number of days into a date value.
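
A small JDBC sketch, with illustrative connection details and values, exercising a few of these functions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FunctionDemo {

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT CONCAT('Mc', 'Donald') AS name," +
                 "       NOW() AS at_time," +
                 "       DATEDIFF(CURDATE(), '2000-01-01') AS days_since")) {
            while (rs.next()) {
                // Prints the concatenated string, the current timestamp, and a day count
                System.out.printf("%s %s %d%n",
                    rs.getString("name"), rs.getString("at_time"), rs.getInt("days_since"));
            }
        }
    }
}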

 

Ques: 19). What are Access Control Lists and how do they work?

Answer:

An Access Control List (ACL) is a set of permissions associated with a certain item. This list is the foundation of the MySQL server's security paradigm, and knowing it can immensely assist you in diagnosing problems with users being unable to connect.

The ACLs (also known as grant tables) are cached in memory by MySQL. MySQL validates the authentication information and permissions against the ACLs in a predetermined order when a user tries to authenticate or run a command.

 

Ques: 20). What is the "i-am-a-dummy flag" in MySQL used for?

Answer:

We use the "i-am-a-dummy" flag (an alias for --safe-updates) when we want the mysql client to reject UPDATE and DELETE statements that are issued without a WHERE clause.




November 17, 2021

Top 20 Apache NiFi Interview Questions and Answers

  

Ques: 1). Is there a functional overlap between NiFi and Kafka?

Answer: 

This is a pretty typical question, and the two are actually extremely complementary. A Kafka broker provides very low latency when you have a large number of consumers pulling from the same topic. However, Kafka isn't built to tackle dataflow problems, such as data prioritisation and enrichment. Furthermore, Kafka prefers smaller messages, in the KB-to-MB range, whereas NiFi can accept files of up to a GB or more per file. NiFi complements Kafka by solving all of Kafka's dataflow issues.




Ques: 2). What is Apache NiFi, and how does it work?

Answer: 

Apache NiFi is a dataflow automation and enterprise integration solution that allows you to send, receive, route, alter, and modify data as needed, all while being automated and configurable. NiFi can connect many information systems as well as several types of sources and destinations such as HTTP, FTP, HDFS, File System, and various databases.




Ques: 3). Is NiFi a viable alternative to ETL and batch processing?

Answer: 

For certain use situations, NiFi can likely replace ETL, and it can also be utilised for batch processing. However, the type of processing/transformation required by the use case should be considered. Flow Files are used in NiFi to define the events, objects, and data that pass through the flow. While NiFi allows you to perform any transformation per Flow File, you shouldn't use it to combine Flow Files together based on a common column or perform certain sorts of windowing aggregations. Cloudera advises utilising extra solutions in this situation.

The ideal choice in a streaming use scenario is to have the records transmitted to one or more Kafka topics utilising NiFi's record processors. Based on our acquisition of Eventador, you can then have Flink execute any of the processing you want on this data (joining streams or doing windowing operations) using Continuous SQL.

In a batch use case, NiFi acts as an ELT rather than an ETL (E = extract, T = transform, L = load). NiFi would collect the various datasets, perform the necessary transformations (schema validation, format transformation, data cleansing, and so on) on each dataset, and then send the datasets to a Hive-powered data warehouse. Once the data has arrived there, NiFi could trigger a Hive query to perform the join operation.




Ques: 4). Is NiFi a master-server architecture?

Answer: 

No, NiFi has followed a zero-master philosophy since NiFi 1.0, and each node in a NiFi cluster is identical. The cluster is managed through Apache ZooKeeper: ZooKeeper elects a single node to serve as the Cluster Coordinator and handles failover for you. All cluster nodes report heartbeat and status information to the Cluster Coordinator, which is responsible for disconnecting and reconnecting nodes. Every cluster also has one Primary Node, likewise elected by ZooKeeper.




Ques: 5). What is the role of Apache NiFi in Big Data Ecosystem?

Answer: 

The main roles Apache NiFi fills in the big data ecosystem are:

  • Data acquisition and delivery.
  • Data transformation.
  • Routing data from different sources to destinations.
  • Event processing.
  • End-to-end provenance.
  • Edge intelligence and bi-directional communication.




Ques: 6). What are the components of a FlowFile?

Answer: 

A FlowFile has two parts:

Content: The content is the stream of bytes that travels from source to destination; it is the actual data being processed in the dataflow. The FlowFile itself holds only a pointer to this data, not the data itself: the actual content is stored in NiFi's Content Repository.

Attributes: The attributes are key-value pairs associated with the data; they serve as the FlowFile's metadata. They are typically used to store values that give context to the data, such as the filename, UUID, MIME type, and the FlowFile's creation time.
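
A minimal custom-processor sketch, assuming the standard NiFi processor API (the class name and attribute key are illustrative), showing how attributes are read and written without touching the content:

import java.util.Collections;
import java.util.Set;

import org.apache.nifi.flowfile.FlowFile;
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.Relationship;
import org.apache.nifi.processor.exception.ProcessException;

public class StampAttributeProcessor extends AbstractProcessor {

    static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success").build();

    @Override
    public Set<Relationship> getRelationships() {
        return Collections.singleton(REL_SUCCESS);
    }

    @Override
    public void onTrigger(ProcessContext context, ProcessSession session)
            throws ProcessException {
        FlowFile flowFile = session.get(); // take one FlowFile from the incoming queue
        if (flowFile == null) {
            return;
        }
        // Attributes are metadata; the content bytes stay in the content repository
        String name = flowFile.getAttribute("filename");
        flowFile = session.putAttribute(flowFile, "processed.by", "StampAttributeProcessor");
        getLogger().info("Stamped attributes onto " + name);
        session.transfer(flowFile, REL_SUCCESS); // route to the success relationship
    }
}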




Ques: 7). What exactly is the distinction between MiNiFi and NiFi?

Answer: 

MiNiFi agents are used to collect data from sensors and devices in remote locations. The goal is to help with the "first mile" of data collection and to obtain data as close to its source as possible.

These devices can include servers, workstations, and laptops, as well as sensors, self-driving vehicles, factory machinery, and other devices where you want to use MiNiFi's NiFi capabilities to collect specialised data, with the ability to filter, select, and triage it before transferring it to a destination.

The objective of MiNiFi is to manage this entire process at scale with Edge Flow Manager so the Operations or IT teams can deploy different flow definitions and collect any data as the business requires. Here are some details to consider:

To move data around or collect data from well-known external systems like databases, object stores, and so on, NiFi is designed to be centrally situated, usually in a data centre or in the cloud. In a hybrid cloud architecture, NiFi should be viewed as a gateway for moving data back and forth between diverse environments.

MiNiFi connects to a host, does some processing and logic, and only distributes the data you care about to external data distribution platforms. Of course, such systems can be NiFi, but they can also be MQTT brokers, cloud provider services, and so on. MiNiFi also enables use scenarios where network capacity is constrained and data volume transferred over the network must be reduced.

MiNiFi is available in two flavours: C++ and Java. The MiNiFi C++ option has a modest footprint (a few MBs of memory, a small CPU), but a limited number of processors. The MiNiFi Java option is a single-node lightweight version of NiFi that lacks the user interface and clustering capabilities. It does, however, necessitate the presence of Java on the host.




Ques: 8). Can the flow be scheduled for automated management once the processor is in place?

Answer: 

Because Apache NiFi is designed around the idea of continuous streaming, processors are scheduled to run continuously by default, unless we choose to run a processor on a timer, for example on an hourly or daily basis. Apache NiFi, however, is not meant to be a job-oriented tool: once we start a processor, it runs all the time.

 



Ques: 9). What are the main features of NiFi?

Answer: 

The main features of Apache NiFi are:

Highly configurable: Apache NiFi is highly flexible in its configuration and lets us decide which trade-offs we want. For example, some of the possibilities are:

  • Loss tolerant vs guaranteed delivery
  • Low latency vs high throughput
  • Dynamic prioritization
  • Flows can be modified at runtime
  • Back pressure

Designed for extension: We can build our own processors, controller services, and so on.

Secure:

  • SSL, SSH, HTTPS, encrypted content, etc.
  • Multi-tenant authorization and internal authorization/policy management

 



Ques: 10). Is there a NiFi connector for any RDBMS database?

Answer: 

Yes, several processors included in NiFi can communicate with an RDBMS in various ways. For example, "ExecuteSQL" lets you issue a SQL SELECT statement against a configured JDBC connection to retrieve rows from a database; "QueryDatabaseTable" lets you fetch incrementally from a database table; and "GenerateTableFetch" lets you fetch incrementally as well, and additionally run against source table partitions.

 



Ques: 11). What is the best way to expose REST API for real-time data collection at scale?

Answer: 

Our customer utilises NiFi to expose a REST API allowing data to be sent to a destination from external sources. HTTP is the most widely used protocol.

If you want to ingest data, you can use the ListenHTTP processor in NiFi, which you can configure to listen on a given port for HTTP requests; any data can then be delivered to it.

Look at the HandleHTTPRequest and HandleHTTPResponse processors if you wish to implement a web service with NiFi. Using the two processors together, you receive an HTTP request from an external client and can respond to that client with a customised answer/result based on the data in the request. For example, you can use NiFi to connect over HTTP to remote systems such as an FTP server: when NiFi receives a request, it runs a query against the FTP server to retrieve the file, which is then returned to the client.

NiFi can handle all of these one-of-a-kind needs with ease. In this scenario, NiFi would scale horizontally to meet the needs, and a load balancer would be placed in front of the NiFi instances to distribute the load throughout the cluster's NiFi nodes.

 



Ques: 12). When NiFi pulls data, do the attributes get added to the content (real data)?

Answer: 

You can absolutely add attributes to your FlowFiles at any moment; after all, the purpose of separating the metadata from the actual data is to allow exactly that. A FlowFile is a representation of an object or message moving through NiFi. Each FlowFile has a piece of content, which is the bytes themselves. Attributes can be extracted from the content and stored in memory, and you can then operate on those in-memory attributes without having to touch the content. This saves a great deal of I/O overhead, making the entire flow management process much more efficient.

 



Ques: 13). Is it possible for NiFi to link to external sources such as Twitter?

Answer: 

Absolutely. NiFi's architecture is extremely flexible, allowing any developer or user to quickly add a data source connector. In the previous edition, NiFi 1.0, 170+ processors were packaged with the application by default, including the Twitter processor. Future releases will almost certainly bring new processors and extensions.

 



Ques: 14). What's the difference between NiFi, Flume, and Sqoop?

Answer: 

NiFi supports all of Flume's use cases and includes the Flume processor out of the box.

Sqoop's features are also supported by NiFi. GenerateTableFetch, for example, is a processor that performs incremental and concurrent fetches against source table partitions.

At the end of the day, we want to know if we're solving a specific or unique use case. If that's the case, any of the tools will suffice. When we consider several use cases being handled at once, as well as essential flow management features like interactive, real-time command and control with full data provenance, NiFi's benefits will really shine.

 

Ques: 15). What happens to data if NiFi goes down?

Answer: 

As data moves through the system, NiFi stores it in repositories. There are three important repositories:

The flowfile repository.

The content repository.

The provenance repository.

When a processor finishes writing data to a flowfile, whose content is streamed directly to the content repository, it commits the session. This updates the provenance repository with the events that occurred in that processor, and it updates the flowfile repository to keep track of where the file is in the flow. Finally, the flowfile can be moved to the next queue in the flow.

If NiFi goes down at any point, it will be able to restart where it left off. One detail, however: when we update the repositories, we write to disk by default, but the OS frequently caches the writes. If the OS dies together with NiFi in the event of a failure, the cached data may be lost. If we absolutely want to eliminate caching, we can configure the repositories in the nifi.properties file to always sync to disk; this, on the other hand, can be a severe drag on performance. If only NiFi goes down, the data is unaffected, because the OS remains responsible for flushing the cached data to the disk.

 

Ques: 16). What is backpressure in the NiFi system?

Answer: 

Occasionally, the producing system outpaces the consuming system, so messages are consumed more slowly than they arrive. As a result, all unprocessed messages (FlowFiles) accumulate in the connection buffer. However, you can set a backpressure threshold on a connection based on the number of FlowFiles or the total size of the data. When the limit is exceeded, the connection applies back pressure to the producing processor, causing it to stop running. No new FlowFiles are generated until the backpressure is relieved.

 

Ques: 17). What is a bulletin in NiFi and how does it help?

Answer: 

If you want to know whether a dataflow has any issues, you can look through the logs for anything interesting, but it is far more convenient to have notifications appear on the screen. If a Processor logs anything as a WARNING or ERROR, a "Bulletin Indicator" will appear in the top-right-hand corner of that Processor.

This indicator, which resembles a sticky note, is shown for five minutes after the event occurs. By hovering over the bulletin, the user can see what happened without having to search through log messages. When running in a cluster, the bulletin also indicates which node emitted it. We can also change the log level at which bulletins occur in the Settings tab of a Processor's Configure dialog.

 


Ques: 19). What prioritisation scheme is utilised if no prioritizers are set in a processor?

Answer: 

The default priority strategy is described as "undefined," and it is subject to change. If no prioritizers are specified, the processor will order the data using the Content Claim of the FlowFile. It delivers the most efficient data reading and the highest throughput this way. We've debated changing the default setting to First In First Out, but for now, we're going with what works best.

 


This square measure a number of the foremost normally used interview queries vis–vis Apache NiFi. To go surfing a lot of terribly regarding Apache NiFi you’ll be able to check the class Apache NiFi and entertain reach purchase the newssheet for a lot of connected articles.