Building stand-alone Hadoop and Spark on Windows

Building a stand-alone Hadoop and Spark environment lets you learn and test the basics of big data processing locally. Setting up these two tools on the Windows operating system requires some configuration; the following is a detailed tutorial.

Note: Before starting, make sure you have the Java Development Kit (JDK) installed and have downloaded the latest versions of Hadoop and Spark. You can get them from the official websites or mirror sites.

Step 1: Install and configure Java

  1. Download and install Oracle JDK or OpenJDK. Note that Hadoop and Spark do not always support the newest JDK release; Java 8 or 11 is a safe choice. You can download it from Oracle's official website or OpenJDK's official website.

  2. Set system environment variables:

    • On the Windows desktop, right-click This PC and select Properties.
    • Click "Advanced system settings".
    • Under the "Advanced" tab, click the "Environment Variables" button.
    • In the System Variables section, click New.
    • Enter JAVA_HOME as the variable name, and set the variable value to your Java installation path, usually C:\Program Files\Java\jdk1.x.x_xxx.
    • Find the "Path" variable in "System Variables" and click "Edit".
    • Append ;%JAVA_HOME%\bin to the end of the variable value and save.
    • Optionally, create a new system variable named CLASSPATH with the value .;%JAVA_HOME%\lib\dt.jar;%JAVA_HOME%\lib\tools.jar. (A command-line sketch of the same configuration follows this list.)
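
If you prefer the command line, JAVA_HOME can also be set from an administrator Command Prompt with setx. This is only a minimal sketch: the JDK path shown is an example and must be replaced with your actual installation directory, and variables written with setx only take effect in newly opened terminals.

  :: Example path only; point JAVA_HOME at your real JDK directory (/M writes a system-wide variable).
  setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_301" /M

  :: In a NEW Command Prompt, confirm the variable is set and the java command is on the Path.
  echo %JAVA_HOME%
  java -version

If java -version prints the expected JDK version, the Java setup needed by Hadoop and Spark is complete.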

