The first tab in the Hadoop/Spark window is Hue.
Hue aggregates the most common Apache Hadoop components into a single interface and focuses on the user experience.
Its main goal is to let users "just use" Hadoop without worrying about the underlying complexity or using a command line. (Wikipedia)
The second tab is a console that you can use to run Hadoop commands:
All hadoop commands are invoked by the bin/hadoop script.
Running the hadoop script without any arguments prints the description for all commands.
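As a quick sketch, here are a few common commands you could try in the console tab (this assumes the hadoop binary is on PATH, as it typically is on a cluster console; the paths are illustrative):

```shell
# Guard so the snippet also degrades gracefully on a machine without Hadoop
if command -v hadoop >/dev/null 2>&1; then
  hadoop version                  # print the installed Hadoop version
  hadoop fs -ls /                 # list the root directory of HDFS
  hadoop fs -mkdir -p /tmp/demo   # create an HDFS directory (example path)
  hadoop fs -df -h                # show HDFS capacity and usage, human-readable
else
  echo "hadoop not found on PATH; run these commands in the console tab"
fi
```

Running `hadoop fs` with no arguments likewise prints the list of supported filesystem commands.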
Here, on the Spark UI, you can see information about running and completed applications and their jobs.
Here you can browse your application history:
Hadoop provides a Web UI (along with the CLI) for HDFS and the YARN ResourceManager. Here you can see the following overview of your cluster:
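For reference, the Web UIs are usually reachable at default ports like the ones below; hostnames are placeholders, and the exact ports depend on your Hadoop version and cluster configuration:

```
http://<namenode-host>:9870/         # HDFS NameNode UI (Hadoop 3.x; 50070 in 2.x)
http://<resourcemanager-host>:8088/  # YARN ResourceManager UI
http://<spark-host>:18080/           # Spark History Server (completed applications)
```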