How do I get the video stream data?

Copyright notice: this is the author's original article and may not be reposted without permission. https://blog.csdn.net/weixin_39672386/article/details/77540068

 Some notes from device network SDK development

 This is my first blog post and I honestly didn't know where to start, so treat it as a summary rather than a tutorial. Some background: I had just started a new job and my boss assigned me a task. Setting the project context aside, the gist was: pull the video stream out of the SDK, run some image processing on it, and then play the result back as video. Having been on the job less than 20 days, with nobody to mentor me (and my programming fundamentals are not great), I was mostly lost and wrote the code while searching as I went. Broadly, I got the following working:
  • Grab the video stream data and convert it to RGB. This is mainly done in the MyFRealDataCallBack preview-data callback:
public void invoke(NativeLong nPort, ByteByReference pBuffer, NativeLong nSize,
        FRAME_INFO frameInfo, NativeLong nReserved1, NativeLong nReserved2) {
    NativeLong lFrameType = frameInfo.nType;
    HWND hwnd = new HWND(Native.getComponentPointer(panelRealplay));
    System.out.println("Frame type: " + lFrameType.intValue());

    if (lFrameType.intValue() == PlayCtrl.T_AUDIO16) {
        // audio frames are ignored here
    } else if (lFrameType.intValue() == PlayCtrl.T_YV12) { // the Hikvision SDK delivers YV12
        byte[] array = pBuffer.getPointer().getByteArray(0, nSize.intValue()); // copy out one frame

        DataOutputStream d;
        try {
            // note: without append mode, each callback overwrites the file
            // with the latest frame instead of accumulating the stream
            d = new DataOutputStream(new FileOutputStream("E:/video/shipin/VideoYV12.yuv"));
            d.write(array); // save the raw frame
            d.flush();
            d.close();
        } catch (FileNotFoundException ex) {
            Logger.getLogger(ClientDemo.class.getName()).log(Level.SEVERE, null, ex);
        } catch (IOException ex) {
            Logger.getLogger(ClientDemo.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
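Before writing the buffer out, it is worth checking that nSize matches the expected YV12 frame size for the resolution the camera reports; this catches truncated frames early. A minimal sketch of the arithmetic (the class name and the 704x576 resolution are my own examples, not from the SDK):

```java
public class Yv12Size {
    // Expected byte count of one YV12 frame: a full-resolution Y plane
    // (width * height bytes) plus two quarter-size chroma planes,
    // i.e. width * height * 3 / 2 bytes in total.
    static int yv12FrameSize(int width, int height) {
        return width * height * 3 / 2;
    }

    public static void main(String[] args) {
        // 704x576 (4CIF) is a common resolution for these cameras
        System.out.println(yv12FrameSize(704, 576)); // prints 608256
    }
}
```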
  • Format conversion, done by the following two functions:
private IplImage YV12_ToIplImage(byte[] yv12, int width, int height) {
    if (yv12 == null) {
        return null;
    }

    byte[] rgb24 = YV12_TO_RGB24(yv12, width, height);
    if (rgb24 == null) {
        return null;
    }

    IplImage image = cvCreateImage(cvSize(width, height), 8, 3);
    image.imageData(new BytePointer(rgb24));

    return image;
}
private byte[] YV12_TO_RGB24(byte[] array, int width, int height) {
    if (array == null) {
        return null;
    }

    int nYLen = width * height;
    int halfWidth = width >> 1;

    if (nYLen < 1 || halfWidth < 1) {
        return null;
    }

    // Convert YV12 to RGB24. In YV12 the V plane immediately follows
    // the Y plane, and the U plane follows the V plane.
    byte[] rgb24 = new byte[width * height * 3];
    int[] rgb = new int[3];
    int i, j, m, n, x, y;
    m = -width;
    n = -halfWidth;
    for (y = 0; y < height; y++) {
        m += width;
        if (y % 2 != 0) {
            n += halfWidth;
        }

        for (x = 0; x < width; x++) {
            i = m + x;
            j = n + (x >> 1);
            rgb[2] = (int) ((array[i] & 0xFF)
                    + 1.370705 * ((array[nYLen + j] & 0xFF) - 128)); // R
            rgb[1] = (int) ((array[i] & 0xFF)
                    - 0.698001 * ((array[nYLen + (nYLen >> 2) + j] & 0xFF) - 128)
                    - 0.703125 * ((array[nYLen + j] & 0xFF) - 128)); // G
            rgb[0] = (int) ((array[i] & 0xFF)
                    + 1.732446 * ((array[nYLen + (nYLen >> 2) + j] & 0xFF) - 128)); // B

            j = m + x;
            i = (j << 1) + j; // i = 3 * pixel index

            // clamp each channel to [0, 255]
            for (j = 0; j < 3; j++) {
                if (rgb[j] >= 0 && rgb[j] <= 255) {
                    rgb24[i + j] = (byte) rgb[j];
                } else {
                    rgb24[i + j] = (byte) ((rgb[j] < 0) ? 0 : 255);
                }
            }
        }
    }

    return rgb24;
}
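The per-pixel math above can be checked in isolation: with neutral chroma (U = V = 128) every term except Y vanishes, so the output should be a gray pixel whose R, G and B all equal Y. A small standalone sketch using the same coefficients (single pixel only, none of the plane indexing; the class name is mine):

```java
public class Yv12Pixel {
    // Convert one (Y, U, V) triple to {r, g, b} with the same constants
    // used in YV12_TO_RGB24, including the final clamp to [0, 255].
    static int[] yuvToRgb(int y, int u, int v) {
        int r = (int) (y + 1.370705 * (v - 128));
        int g = (int) (y - 0.698001 * (u - 128) - 0.703125 * (v - 128));
        int b = (int) (y + 1.732446 * (u - 128));
        return new int[] {
            Math.max(0, Math.min(255, r)),
            Math.max(0, Math.min(255, g)),
            Math.max(0, Math.min(255, b))
        };
    }

    public static void main(String[] args) {
        int[] gray = yuvToRgb(200, 128, 128);
        System.out.println(gray[0] + "," + gray[1] + "," + gray[2]); // prints 200,200,200
    }
}
```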
  • After the image processing, I planned to feed the frames back through the SDK's play library, so I converted them back to YV12:
private int[] getImagePixel(String image, int width, int height) {
    int[] intValues = new int[width * height];
    File file = new File(image);
    BufferedImage bi = null;
    try {
        bi = ImageIO.read(file);
        int minx = bi.getMinX();
        int miny = bi.getMinY();
        // note: this assumes the file is at least width x height pixels
        for (int y = miny; y < height; y++) {
            for (int x = minx; x < width; x++) {
                int pixel = bi.getRGB(x, y); // packed 0xAARRGGBB
                intValues[y * width + x] = pixel;
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return intValues;
}
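The same pixel-grabbing idea can be exercised without touching the disk, by building a small ARGB image in memory and reading it back with getRGB, which is exactly what the loop above does one pixel at a time (the class name and pixel values here are made up for the demo):

```java
import java.awt.image.BufferedImage;

public class PixelGrab {
    // copy all pixels of an image into a row-major int array
    static int[] pixels(BufferedImage bi) {
        int w = bi.getWidth(), h = bi.getHeight();
        int[] out = new int[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                out[y * w + x] = bi.getRGB(x, y); // packed 0xAARRGGBB
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BufferedImage bi = new BufferedImage(2, 1, BufferedImage.TYPE_INT_ARGB);
        bi.setRGB(0, 0, 0xFFFF0000); // opaque red
        bi.setRGB(1, 0, 0xFF0000FF); // opaque blue
        int[] p = pixels(bi);
        System.out.println(Integer.toHexString(p[0])); // prints ffff0000
    }
}
```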
private void encodeYV12(byte[] yuv, int[] argb, int width, int height) {
    final int frameSize = width * height;

    int yIndex = 0;
    // YV12 stores the V plane immediately after Y, then the U plane
    int vIndex = frameSize;
    int uIndex = frameSize + (frameSize / 4);

    int a, R, G, B, Y, U, V;
    int index = 0;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {

            a = (argb[index] & 0xff000000) >> 24; // alpha is not used
            R = (argb[index] & 0xff0000) >> 16;
            G = (argb[index] & 0xff00) >> 8;
            B = (argb[index] & 0xff);

            // well-known fixed-point RGB to YUV formulas
            Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
            U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
            V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;

            // YV12 has a full-resolution Y plane and two chroma planes (V, U),
            // each subsampled by a factor of 2 in both directions: for every
            // 4 Y samples there is 1 V and 1 U, taken every other pixel on
            // every other scanline (the index % 2 test assumes an even width,
            // where it is equivalent to i % 2).
            yuv[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
            if (j % 2 == 0 && index % 2 == 0) {
                yuv[vIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                yuv[uIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
            }

            index++;
        }
    }
}
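The fixed-point coefficients in encodeYV12 can also be checked for a single pixel: a mid-gray input should produce neutral chroma, since the U and V coefficient rows each sum to zero. A minimal sketch (single pixel only, no plane layout; the class name is mine):

```java
public class RgbToYuvCheck {
    // same fixed-point formulas as encodeYV12, for one pixel
    static int[] rgbToYuv(int r, int g, int b) {
        int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
        int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
        int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
        return new int[] { y, u, v };
    }

    public static void main(String[] args) {
        // mid-gray: (66+129+25)*128 = 28160, +128 then >>8 gives 110, +16 = 126;
        // the U and V rows sum to 0, so both come out as exactly 128
        int[] yuv = rgbToYuv(128, 128, 128);
        System.out.println(yuv[0] + "," + yuv[1] + "," + yuv[2]); // prints 126,128,128
    }
}
```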
  

And at this very last step, with victory in sight... I discovered the playback does not look the way we want. So, experts: how should I take the frames I now have, one image at a time, and play them back as video? Please help; this is urgent, my boss is pushing, and I'm waiting online.



            
