I have a simple server that looks like this:

```java
public static void main(String[] args) throws IOException {
    ServerSocket ss = new ServerSocket(4999);
    Socket s = ss.accept();
    InputStream is = s.getInputStream();
    while (true) {
        System.out.println(is.read());
    }
}
```
It accepts a single client socket, reads from it forever and prints out the number that was sent from the client socket.
I have a client like this:

```java
public static void main(String[] args) throws IOException, InterruptedException {
    int id = Integer.valueOf(args[0]);
    Socket s = new Socket("localhost", 4999);
    OutputStream os = s.getOutputStream();
    while (true) {
        os.write(id);
        Thread.sleep(1000L);
        System.out.println("Sent");
    }
}
```
It connects to the server and forever sends the number it received as a command-line argument.
- I start the server.
- I start a client like `java -jar client.jar 123`.
- Then I start another client like `java -jar client.jar 234`.
- No errors occur on either the server side or the client side.
- Each client prints the `Sent` message every second; neither gets blocked.
- The server only prints `123` until the end of times.
My questions:
- What happens with the bytes written by the second client?
- I would expect the second client to receive an error or get blocked or something, but nothing happens. Why?
Note: I know that this code is bad and that I should handle each client in its own thread, keep calling `ServerSocket.accept()`, and all that jazz.
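For completeness, here is a minimal sketch of that "threads and `accept()`" version. The class name, the ephemeral port, and the one-shot clients are my own choices to make the demo self-contained; they are not part of the original code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class MultiClientServer {
    public static void main(String[] args) throws Exception {
        ServerSocket ss = new ServerSocket(0); // port 0: let the OS pick a free port
        int port = ss.getLocalPort();

        // Accept loop in its own thread; each accepted client gets a handler thread.
        Thread acceptLoop = new Thread(() -> {
            while (true) {
                try {
                    Socket s = ss.accept();
                    new Thread(() -> {
                        try (InputStream is = s.getInputStream()) {
                            int b;
                            while ((b = is.read()) != -1) {
                                System.out.println(b);
                            }
                        } catch (IOException ignored) {
                        }
                    }).start();
                } catch (IOException e) {
                    return; // server socket closed, stop accepting
                }
            }
        });
        acceptLoop.setDaemon(true);
        acceptLoop.start();

        // Two clients, as in the question, but each sends its id once and disconnects.
        for (int id : new int[] {123, 234}) {
            try (Socket s = new Socket("localhost", port)) {
                s.getOutputStream().write(id);
            }
        }
        Thread.sleep(500); // give the handler threads time to print
        ss.close();
    }
}
```

With this structure the server prints both `123` and `234`, because both connections get accepted and each has its own reader.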
Update:
Based on the accepted answer, the solution is to create the server like `new ServerSocket(4999, 1)`, where `1` is the size of the backlog. `0` (or any non-positive value) means Java falls back to its implementation-specific default. By using `1`, there can be only one connection in a "non-accepted" state; any further client trying to connect gets a connection refused!
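This can be probed with a small sketch (the class and helper names are mine). One caveat worth hedging: the exact failure mode is OS-dependent. Some systems refuse the extra connection outright, while Linux typically just drops the handshake packets when the backlog is full, so a client with a connect timeout times out instead of getting "connection refused":

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class BacklogDemo {
    // Try to open `attempts` connections to a server that never calls accept().
    // Returns how many connects failed (refused or timed out).
    static int countFailedConnects(int attempts) throws IOException {
        // Backlog of 1: roughly one pending (non-accepted) connection is allowed.
        // The exact limit is OS-dependent; Linux, for example, tends to allow
        // backlog + 1 before dropping further handshakes.
        try (ServerSocket ss = new ServerSocket(0, 1)) {
            List<Socket> open = new ArrayList<>();
            int failed = 0;
            for (int i = 0; i < attempts; i++) {
                Socket s = new Socket();
                try {
                    // Short timeout so a dropped handshake shows up as a failure
                    // (SocketTimeoutException) rather than hanging in retries.
                    s.connect(new InetSocketAddress("localhost", ss.getLocalPort()), 500);
                    open.add(s); // keep it open so it stays in the backlog
                } catch (IOException e) { // ConnectException or SocketTimeoutException
                    failed++;
                }
            }
            for (Socket s : open) {
                s.close();
            }
            return failed;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("failed connects out of 6: " + countFailedConnects(6));
    }
}
```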
- The bytes written by the second client will go into the client `Socket`'s send buffer, since you're writing over a connection that hasn't been `accept`ed yet. Eventually the send buffer will fill up. You could try playing with `Socket.setSendBufferSize()` to see what happens when it fills up.
- A `ServerSocket` has a listen backlog for connections that haven't been `accept`ed yet. The second client's connection is sitting in that backlog, and if the server ever got around to `accept`ing it (which it won't, with your code, but there is no way for the client to know that), the connection would be established and the client's send buffer would be sent merrily along. You could try calling the constructor `ServerSocket(int port, int backlog)` with a backlog of `1` to see what happens to the client when the listen backlog fills up: it should get connection refused.
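The buffering described in the first bullet can be observed directly: if the peer never reads, `write()` succeeds silently until the client's send buffer and the peer's receive buffer are both full, and then blocks. A sketch under my own naming and sizing assumptions (the actual buffer sizes, and hence how many bytes fit before blocking, are OS-dependent):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicLong;

public class SendBufferDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket ss = new ServerSocket(0)) {
            Socket client = new Socket();
            client.setSendBufferSize(8 * 1024); // just a hint; the OS may adjust it
            client.connect(new InetSocketAddress("localhost", ss.getLocalPort()));
            Socket serverSide = ss.accept(); // accepted, but we never read from it

            AtomicLong written = new AtomicLong();
            Thread writer = new Thread(() -> {
                try {
                    OutputStream os = client.getOutputStream();
                    byte[] chunk = new byte[4096];
                    while (true) {
                        os.write(chunk); // blocks once send + receive buffers are full
                        written.addAndGet(chunk.length);
                    }
                } catch (IOException ignored) {
                    // socket closed by the main thread below
                }
            });
            writer.setDaemon(true);
            writer.start();
            writer.join(2000); // give it up to 2 seconds to hit the limit

            System.out.println("bytes buffered before blocking: ~" + written.get());
            System.out.println("writer blocked: " + writer.isAlive());
            serverSide.close();
            client.close();
        }
    }
}
```

The question's client never gets anywhere near this limit, though: at one byte per second it would take a very long time to fill even a few kilobytes of buffer, which is why it keeps printing `Sent` happily.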