Java 8 Stream principles

  Stream
  
  Streams are a very important part of Java 8. Spring Reactor is built on reactor-core, whose API is quite similar to Java 8 streams, so getting familiar with streams makes reactor-core much easier to pick up.
  
  Its power is impressive; what follows is just the tip of the iceberg:
  
  Extract the names from a List<Student> and collect them into a List<String>.
  
  Old Code
  
  List<String> nameList = new ArrayList<>();
  if (list != null) {
      for (Student stu : list) {
          nameList.add(stu.getName());
      }
  }
  
  JAVA8
  
  List<String> nameList = Optional.ofNullable(list)
          .orElse(Collections.emptyList())
          .stream()
          .map(Student::getName)
          .collect(Collectors.toList());
  
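  For completeness, here is a minimal, self-contained sketch that puts both versions side by side. The Student class and the demo class name are made-up placeholders for illustration only; they are not from the original post.

  import java.util.ArrayList;
  import java.util.Arrays;
  import java.util.Collections;
  import java.util.List;
  import java.util.Optional;
  import java.util.stream.Collectors;

  public class NameExtractionDemo {

      // hypothetical placeholder type, only used for this demo
      static class Student {
          private final String name;
          Student(String name) { this.name = name; }
          String getName() { return name; }
      }

      public static void main(String[] args) {
          List<Student> list = Arrays.asList(new Student("Tom"), new Student("Jerry"));

          // pre-Java-8 style: explicit null check and loop
          List<String> oldStyle = new ArrayList<>();
          if (list != null) {
              for (Student stu : list) {
                  oldStyle.add(stu.getName());
              }
          }

          // Java 8 style: Optional + stream pipeline
          List<String> newStyle = Optional.ofNullable(list)
                  .orElse(Collections.emptyList())
                  .stream()
                  .map(Student::getName)
                  .collect(Collectors.toList());

          System.out.println(oldStyle); // [Tom, Jerry]
          System.out.println(newStyle); // [Tom, Jerry]
      }
  }
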
  Creating a Stream with Stream.of
  
  Here we look at how Stream.of creates a Stream.
  
  Common collections can create a Stream via their stream() method. Internally, they all end up calling the following method:
  
  public static <T> Stream<T> stream(Spliterator<T> spliterator, boolean parallel) {
      Objects.requireNonNull(spliterator);
      return new ReferencePipeline.Head<>(spliterator,
                                          StreamOpFlag.fromCharacteristics(spliterator),
                                          parallel);
  }
  
  Stream.of has two overloads for creating a Stream.

  The first:

  Stream.of("a1");

  The second:

  Stream.of("a1", "a2"); // this overload is constructed via Arrays.stream
  
  Two related implementation classes show up here:
  
  With a single element, a Spliterator is built directly. With multiple elements there is an optimization: a SpinedBuffer is used.
  
  For a large number of elements a SpinedBuffer is used, while a small one behaves more like an ArrayList. How do you build a Stream through a SpinedBuffer?
  
  Stream.builder().add("a1").add("a2").build();
  
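  A small sketch of the three construction paths described above; the class name is made up for illustration, and the printed contents are the same regardless of which path created the stream.

  import java.util.stream.Stream;

  public class StreamCreationDemo {
      public static void main(String[] args) {
          // single element: backed directly by a one-element spliterator
          Stream<String> single = Stream.of("a1");

          // varargs overload: delegates to Arrays.stream over the argument array
          Stream<String> multiple = Stream.of("a1", "a2");

          // builder: accumulates elements before building the stream
          Stream<String> built = Stream.<String>builder().add("a1").add("a2").build();

          single.forEach(System.out::println);
          multiple.forEach(System.out::println);
          built.forEach(System.out::println);
      }
  }
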
  Stream concepts
  
  Stream operations are divided into two types:

  one is the intermediate operation: it does not need to produce a result, only to record the step; operations that return a Stream object generally belong to this type;

  the other is the terminal operation: it needs to return a result immediately; operations that do not return a Stream generally belong to this type (see the laziness sketch below).
  
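  A minimal sketch of that split: intermediate operations such as peek do nothing on their own, and the pipeline only runs when a terminal operation such as collect is invoked. The class name is made up for illustration.

  import java.util.List;
  import java.util.stream.Collectors;
  import java.util.stream.Stream;

  public class LazinessDemo {
      public static void main(String[] args) {
          Stream<String> pipeline = Stream.of("a1", "a2", "a3")
                  .peek(s -> System.out.println("intermediate saw " + s)); // nothing printed yet

          System.out.println("pipeline built, nothing has run so far");

          // the terminal operation triggers the whole pipeline
          List<String> result = pipeline.collect(Collectors.toList());
          System.out.println("result = " + result);
      }
  }
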
  Stream stages are divided into three kinds:

  first: Head, the source stage;

  second: Stateless; stateless, the handling of each element is independent of the others;

  third: Stateful; stateful, producing a result may require seeing more elements first, for example when sorting.
  
  Stream operating characteristics:

  Operating characteristics describe properties such as whether the stream has a fixed size, whether it is ordered, whether the data is sorted, and so on; a quick way to observe them is sketched below.
  
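  A minimal way to observe these characteristics from user code is through the stream's Spliterator (the internal StreamOpFlag values themselves are not public API); the class name is made up for illustration.

  import java.util.Arrays;
  import java.util.List;
  import java.util.Spliterator;

  public class CharacteristicsDemo {
      public static void main(String[] args) {
          List<String> list = Arrays.asList("a1", "a2", "a3");
          Spliterator<String> sp = list.stream().spliterator();

          System.out.println("SIZED   = " + sp.hasCharacteristics(Spliterator.SIZED));   // size is known
          System.out.println("ORDERED = " + sp.hasCharacteristics(Spliterator.ORDERED)); // encounter order matters
          System.out.println("SORTED  = " + sp.hasCharacteristics(Spliterator.SORTED));  // data is sorted

          // after filter the size can no longer be known up front
          Spliterator<String> filtered = list.stream().filter(s -> s.endsWith("1")).spliterator();
          System.out.println("SIZED after filter = " + filtered.hasCharacteristics(Spliterator.SIZED));
      }
  }
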
  Stream.filter
  
  As the name implies: it filters the Stream and returns a new Stream. From the previous section we know the actual data lives in the Spliterator; the Stream itself can be understood as just a description of an algorithm.

  filter is only an intermediate operation, so we just need to record the step and return a new Stream. Calling filter again returns yet another new Stream.

  The original post shows a flow chart here: the pipeline is a chain of wrapped Sink operators. When filter is called, each element is taken from the Head, passes through the first Sink, then through the second Sink, and so on, until the final result is produced.

  The following is the source of Stream.filter:
  
  public final Stream<P_OUT> filter(Predicate<? super P_OUT> predicate) {
      Objects.requireNonNull(predicate);
      return new StatelessOp<P_OUT, P_OUT>(this, StreamShape.REFERENCE,
                                           StreamOpFlag.NOT_SIZED) {
          @Override
          Sink<P_OUT> opWrapSink(int flags, Sink<P_OUT> sink) {
              return new Sink.ChainedReference<P_OUT, P_OUT>(sink) {
                  @Override
                  public void begin(long size) {
                      // after filtering the downstream size is no longer known
                      downstream.begin(-1);
                  }

                  @Override
                  public void accept(P_OUT u) {
                      // only pass the element downstream if it satisfies the predicate
                      if (predicate.test(u))
                          downstream.accept(u);
                  }
              };
          }
      };
  }
  
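  A small sketch of what that wrapping means in practice: two chained filters become two chained sinks, and each element flows through them one at a time (peek is only used here to make the flow visible; the class name is made up).

  import java.util.Arrays;
  import java.util.List;
  import java.util.stream.Collectors;

  public class FilterChainDemo {
      public static void main(String[] args) {
          List<Integer> result = Arrays.asList(1, 2, 3, 4, 5, 6).stream()
                  .peek(i -> System.out.println("head emitted      " + i))
                  .filter(i -> i % 2 == 0)                       // first sink: keep even numbers
                  .peek(i -> System.out.println("passed filter #1  " + i))
                  .filter(i -> i > 2)                            // second sink: keep values greater than 2
                  .peek(i -> System.out.println("passed filter #2  " + i))
                  .collect(Collectors.toList());

          System.out.println(result); // [4, 6]
      }
  }
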
  Stream.peek
  
  This method can be understood as a debugging aid: it has no effect on the result and passes each element on to the next operator untouched.
  
  public final Stream<P_OUT> peek(Consumer<? super P_OUT> action) {
      Objects.requireNonNull(action);
      return new StatelessOp<P_OUT, P_OUT>(this, StreamShape.REFERENCE,
                                           0) {
          @Override
          Sink<P_OUT> opWrapSink(int flags, Sink<P_OUT> sink) {
              return new Sink.ChainedReference<P_OUT, P_OUT>(sink) {
                  @Override
                  public void accept(P_OUT u) {
                      // run the side-effecting action, then forward the element unchanged
                      action.accept(u);
                      downstream.accept(u);
                  }
              };
          }
      };
  }
  
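  A typical use, sketched below with a made-up class name: sprinkling peek between stages to watch values flow through without changing the outcome.

  import java.util.Arrays;
  import java.util.stream.Collectors;

  public class PeekDebugDemo {
      public static void main(String[] args) {
          String joined = Arrays.asList("one", "two", "three").stream()
                  .peek(s -> System.out.println("before map: " + s))
                  .map(String::toUpperCase)
                  .peek(s -> System.out.println("after map:  " + s))
                  .collect(Collectors.joining(", "));

          System.out.println(joined); // ONE, TWO, THREE
      }
  }
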
  Stream.flatMap
  
  This operator maps each element to a Stream of its own, then calls forEach on that Stream so that every element it contains is passed on to the next operator.
  
  public final <R> Stream<R> flatMap(Function<? super P_OUT, ? extends Stream<? extends R>> mapper) {
      Objects.requireNonNull(mapper);
      // We can do better than this, by polling cancellationRequested when stream is infinite
      return new StatelessOp<P_OUT, R>(this, StreamShape.REFERENCE,
                                       StreamOpFlag.NOT_SORTED | StreamOpFlag.NOT_DISTINCT | StreamOpFlag.NOT_SIZED) {
          @Override
          Sink<P_OUT> opWrapSink(int flags, Sink<R> sink) {
              return new Sink.ChainedReference<P_OUT, R>(sink) {
                  @Override
                  public void begin(long size) {
                      downstream.begin(-1);
                  }

                  @Override
                  public void accept(P_OUT u) {
                      try (Stream<? extends R> result = mapper.apply(u)) {
                          // We can do better that this too; optimize for depth=0 case and just grab spliterator and forEach it
                          if (result != null)
                              result.sequential().forEach(downstream);
                      }
                  }
              };
          }
      };
  }
  
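  A small usage sketch (class name made up): each sentence is mapped to a Stream of its words, and the inner streams are flattened into a single stream of words.

  import java.util.Arrays;
  import java.util.List;
  import java.util.stream.Collectors;

  public class FlatMapDemo {
      public static void main(String[] args) {
          List<String> sentences = Arrays.asList("hello world", "java streams");

          List<String> words = sentences.stream()
                  .flatMap(s -> Arrays.stream(s.split(" "))) // each element becomes its own stream of words
                  .collect(Collectors.toList());

          System.out.println(words); // [hello, world, java, streams]
      }
  }
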
  Stream.map
  
  Similar to the above, except each element is mapped to another object.
  
  public final <R> Stream<R> map(Function<? super P_OUT, ? extends R> mapper) {
      Objects.requireNonNull(mapper);
      return new StatelessOp<P_OUT, R>(this, StreamShape.REFERENCE,
                                       StreamOpFlag.NOT_SORTED | StreamOpFlag.NOT_DISTINCT) {
          @Override
          Sink<P_OUT> opWrapSink(int flags, Sink<R> sink) {
              return new Sink.ChainedReference<P_OUT, R>(sink) {
                  @Override
                  public void accept(P_OUT u) {
                      // apply the mapping function and pass the result downstream
                      downstream.accept(mapper.apply(u));
                  }
              };
          }
      };
  }
  
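  A quick sketch (class name made up): mapping each string to its length; the mapper runs once per element as it flows through the sink.

  import java.util.Arrays;
  import java.util.List;
  import java.util.stream.Collectors;

  public class MapDemo {
      public static void main(String[] args) {
          List<Integer> lengths = Arrays.asList("a", "bb", "ccc").stream()
                  .map(String::length)             // each element is mapped to another object (its length)
                  .collect(Collectors.toList());

          System.out.println(lengths); // [1, 2, 3]
      }
  }
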
  Stream.limit
  
  This is a stateful operation, because what it emits depends on how much data has already flowed through the Stream. Only the core of the algorithm is posted here:
  
  Sink<T> opWrapSink(int flags, Sink<T> sink) {
      return new Sink.ChainedReference<T, T>(sink) {
          long n = skip;                                  // elements still to skip
          long m = limit >= 0 ? limit : Long.MAX_VALUE;   // elements still allowed through

          @Override
          public void begin(long size) {
              downstream.begin(calcSize(size, skip, m));
          }

          @Override
          public void accept(T t) {
              if (n == 0) {
                  if (m > 0) {
                      m--;
                      downstream.accept(t);
                  }
              }
              else {
                  n--;
              }
          }

          @Override
          public boolean cancellationRequested() {
              return m == 0 || downstream.cancellationRequested();
          }
      };
  }
  
  Stream.skip
  
  This is similar to Stream.limit; used together, the two can implement pagination.
  
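  A small pagination sketch using skip and limit together (class name made up; page numbering starts at 0 purely for illustration):

  import java.util.List;
  import java.util.stream.Collectors;
  import java.util.stream.IntStream;

  public class PagingDemo {
      public static void main(String[] args) {
          List<Integer> data = IntStream.rangeClosed(1, 20).boxed().collect(Collectors.toList());

          int page = 1;      // the second page
          int pageSize = 5;

          List<Integer> pageContent = data.stream()
                  .skip((long) page * pageSize) // skip the earlier pages
                  .limit(pageSize)              // take at most one page
                  .collect(Collectors.toList());

          System.out.println(pageContent); // [6, 7, 8, 9, 10]
      }
  }
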
  Stream.sorted
  
  Sorts the stream; if no comparator is passed, the natural ordering is used.

  If the data is already sorted, nothing is done; if the size is known, a fixed-size array is used for sorting; otherwise the elements are collected into a list and then sorted.
  
  public Sink<T> opWrapSink(int flags, Sink<T> sink) {
      Objects.requireNonNull(sink);

      // If the input is already naturally sorted and this operation
      // also naturally sorted then this is a no-op
      if (StreamOpFlag.SORTED.isKnown(flags) && isNaturalSort)
          return sink;
      else if (StreamOpFlag.SIZED.isKnown(flags))
          return new SizedRefSortingSink<>(sink, comparator);
      else
          return new RefSortingSink<>(sink, comparator);
  }
  
  Because of operations like sorting and paging, an operator needs to support begin and end methods, and sometimes a way to cancel. Why? Suppose the Head holds 20 objects but only the first element downstream is needed: as soon as the first operator has handed over one element, the downstream needs a way to tell it to stop.
  
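  A sketch of that cancellation in action (class name made up): limit(1) makes cancellationRequested return true after one element, so the upstream stops producing even though the source has more data; peek is only there to make this visible.

  import java.util.Arrays;
  import java.util.Optional;

  public class CancellationDemo {
      public static void main(String[] args) {
          Optional<Integer> first = Arrays.asList(1, 2, 3, 4, 5).stream()
                  .peek(i -> System.out.println("produced " + i)) // prints only "produced 1"
                  .limit(1)
                  .findFirst();

          System.out.println("result = " + first.orElse(null)); // result = 1
      }
  }
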
  Stream.anyMatch
  
  Let's look at how anyMatch is implemented.
  
  @Override
  public final boolean anyMatch(Predicate<? super P_OUT> predicate) {
      return evaluate(MatchOps.makeRef(predicate, MatchOps.MatchKind.ANY));
  }
  
  Step 2: evaluate mainly combines the stream with the source Spliterator, the container that holds the original data.
  
  final <R> R evaluate(TerminalOp<E_OUT, R> terminalOp) {
      assert getOutputShape() == terminalOp.inputShape();
      if (linkedOrConsumed)
          throw new IllegalStateException(MSG_STREAM_LINKED);
      linkedOrConsumed = true;

      return isParallel()
             ? terminalOp.evaluateParallel(this, sourceSpliterator(terminalOp.getOpFlags()))
             : terminalOp.evaluateSequential(this, sourceSpliterator(terminalOp.getOpFlags()));
  }
  
  Step 3: finally, the source container is handed to the terminal operator.
  
  @Override
  
  public <S> Boolean evaluateSequential(PipelineHelper<T> helper,
  
  Spliterator<S> spliterator) {
  
  return helper.wrapAndCopyInto(sinkSupplier.get(), spliterator).getAndClearState();
  
  }
  
  Step 4: wrap the operators
  
  final <P_IN> Sink<P_IN> wrapSink(Sink<E_OUT> sink) {
  
  Objects.requireNonNull(sink);
  
  for ( @SuppressWarnings("rawtypes") AbstractPipeline p=AbstractPipeline.this; p.depth > 0; p=p.previousStage) {
  
  sink = p.opWrapSink(p.previousStage.combinedFlags, sink);
  
  }
  
  return (Sink<P_IN>) sink;
  
  }
  
  Step 5: pass the data through
  
  @Override
  
  final <P_IN, S extends Sink<E_OUT>> S wrapAndCopyInto(S sink, Spliterator<P_IN> spliterator) {
  
  copyInto(wrapSink(Objects.requireNonNull(sink)), spliterator);
  
  return sink;
  
  }
  
  final <P_IN> void copyInto(Sink<P_IN> wrappedSink, Spliterator<P_IN> spliterator) {
      Objects.requireNonNull(wrappedSink);

      if (!StreamOpFlag.SHORT_CIRCUIT.isKnown(getStreamAndOpFlags())) {
          // no short-circuiting: push every element through the sink chain
          wrappedSink.begin(spliterator.getExactSizeIfKnown());
          spliterator.forEachRemaining(wrappedSink);
          wrappedSink.end();
      }
      else {
          // the calculation may need to stop early once the requirement is met
          copyIntoWithCancel(wrappedSink, spliterator);
      }
  }
  
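  Putting the five steps together from the user's point of view (class name made up): anyMatch wraps the sinks, pushes elements through them, and stops as soon as one element matches; the peek output shows how far the traversal actually went.

  import java.util.Arrays;

  public class AnyMatchDemo {
      public static void main(String[] args) {
          boolean hasEven = Arrays.asList(1, 3, 4, 5, 6).stream()
                  .peek(i -> System.out.println("checking " + i)) // stops printing once a match is found
                  .anyMatch(i -> i % 2 == 0);

          System.out.println("hasEven = " + hasEven); // true, after checking 1, 3, 4
      }
  }
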
  Stream.spliterator
  
  It only needs a Sink; calling wrapSink and then copyInto is enough. This is achieved by:
  
  final <P_IN> Spliterator<P_OUT> wrap(PipelineHelper<P_OUT> ph,
                                       Supplier<Spliterator<P_IN>> supplier,
                                       boolean isParallel) {
      return new StreamSpliterators.WrappingSpliterator<>(ph, supplier, isParallel);
  }

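  A short sketch of consuming a pipeline through its Spliterator instead of a terminal operation (class name made up); the wrapping spliterator applies the intermediate operations lazily as elements are pulled.

  import java.util.Arrays;
  import java.util.Spliterator;

  public class SpliteratorDemo {
      public static void main(String[] args) {
          Spliterator<String> sp = Arrays.asList("a1", "b1", "a2").stream()
                  .filter(s -> s.startsWith("a"))    // intermediate op, applied lazily by the wrapping spliterator
                  .spliterator();

          // pull elements one at a time
          while (sp.tryAdvance(System.out::println)) {
              // prints a1 then a2
          }
      }
  }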