WebSocket: exploring its capabilities with audio and images

2015/12/26 · JavaScript · websocket

Original article source: AlloyTeam

WebSocket should be familiar to most of us by now, and if it isn't, that's fine too; here it is in one sentence:

"The WebSocket protocol is a new protocol in HTML5. It enables full-duplex communication between the browser and the server."

Compared with the traditional server-push techniques, WebSocket is a huge step forward: we can wave goodbye to comet and long polling, and be glad that we live in the age of HTML5.

This article explores WebSocket in three parts:

first, common ways to use WebSocket; second, building a server-side WebSocket implementation entirely by ourselves; and finally, the main event: two demos built on WebSocket, image transfer and an online voice chat room. Let's go.

Part 1: Common ways to use WebSocket

Here I'll introduce three WebSocket implementations that I consider common. (Note: this article assumes a Node.js environment.)

1. socket.io

First, a demo:

JavaScript

var http = require('http');
var io = require('socket.io');
 
var server = http.createServer(function(req, res) {
    res.writeHeader(200, {'content-type': 'text/html;charset="utf-8"'});
    res.end();
}).listen(8888);
 
var socket = io.listen(server);
 
socket.sockets.on('connection', function(socket) {
    socket.emit('xxx', { /* options */ });
 
    socket.on('xxx', function(data) {
        // do something
    });
});
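
For completeness, here is a minimal client-side counterpart. This is my own sketch rather than part of the original article; it assumes the socket.io client script is already loaded on the page (socket.io serves one at /socket.io/socket.io.js by default), and the 'xxx' event names are placeholders mirroring the server demo above.

JavaScript

// assumes the socket.io client library is already loaded on the page
var socket = io.connect('http://localhost:8888');

socket.on('connect', function() {
    // 'xxx' is a placeholder event name, mirroring the server demo above
    socket.emit('xxx', { hello: 'world' });
});

socket.on('xxx', function(data) {
    console.log(data);
});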

Anyone who knows WebSocket almost certainly knows socket.io, because socket.io is simply that famous, and genuinely good: it handles timeouts, the handshake and so on for you. I'd guess it's also the most widely used way of working with WebSocket. The most appealing part of socket.io is graceful degradation: when a browser doesn't support WebSocket, it quietly falls back internally to long polling and the like, and neither users nor developers need to care about the details. Very convenient.

However, everything has two sides. The price of that compatibility is bloat: socket.io's wrapping adds quite a bit of overhead to the data on the wire, and the graceful-degradation advantage is gradually losing its significance as browser standardization progresses:

Chrome Supported in version 4
Firefox Supported in version 4
Internet Explorer Supported in version 10
Opera Supported in version 10
Safari Supported in version 5

This is not to accuse socket.io of being bad or obsolete; it's just that sometimes we may want to consider other implementations.

 

2. The http module

Having just called socket.io heavyweight, let's now look at a lightweight option. First, the demo:

JavaScript

var http = require('http');
var server = http.createServer();

server.on('upgrade', function(req) {
    console.log(req.headers);
});

server.listen(8888);

A very simple implementation. In fact this is how socket.io implements WebSocket internally too, except that it adds a set of handlers on top, and we can add our own as well. Here are two screenshots from the socket.io source:

[Figure 1: socket.io source code screenshot]

[Figure 2: socket.io source code screenshot]
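
A small side note of my own, not from the original article: the 'upgrade' listener also receives the raw socket, and that is exactly where a handshake response would be written, which is what Part 2 below does by hand. A minimal sketch:

JavaScript

var http = require('http');
var server = http.createServer();

server.on('upgrade', function(req, socket, head) {
    // req.headers contains Sec-WebSocket-Key and friends;
    // a real implementation would compute Sec-WebSocket-Accept here and
    // write the "101 Switching Protocols" response to `socket` (see Part 2).
    console.log(req.headers['sec-websocket-key']);
});

server.listen(8888);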

 

3. The ws module

An example later on will use it, so I'll just mention it here; we'll see it in detail further down.

 

Part 2: Implementing a server-side WebSocket ourselves

We've just covered three common WebSocket implementations. Now consider, from a developer's point of view:

compared with traditional HTTP data exchange, WebSocket adds server-pushed events, and the client simply handles the events it receives; developing with it doesn't actually feel that different.

That is because those modules have already filled in all the pitfalls around data-frame parsing for us. In this second part we'll try to build a simple server-side WebSocket module ourselves.

Thanks to 次碳酸钴 for the research support. I'll only go over this part briefly; if you're curious about the details, search for 【web技术研究所】.

Implementing a server-side WebSocket yourself comes down to two things: receiving the data stream with the net module, and parsing the data against the official frame-structure diagram. Once these two parts are done, all the low-level work is in place.

First, a packet capture of the WebSocket handshake request sent by the client.

The client code is very simple:

JavaScript

ws = new WebSocket("ws://127.0.0.1:8888");

[Figure: packet capture of the client's WebSocket handshake request]

The server has to answer this key: append a fixed string to it, run SHA-1 over the result once, and send it back base64-encoded.

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';
 
require('net').createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // extract the Sec-WebSocket-Key sent by the client
            key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            // append the fixed WS string, run SHA-1 once, then base64-encode the result
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            // write the response back to the client; all of these fields are required
            o.write('HTTP/1.1 101 Switching Protocols\r\n');
            o.write('Upgrade: websocket\r\n');
            o.write('Connection: Upgrade\r\n');
            // this field carries the key processed by the server
            o.write('Sec-WebSocket-Accept: ' + key + '\r\n');
            // an empty line to terminate the HTTP headers
            o.write('\r\n');
        }
    });
}).listen(8888);
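
As a quick sanity check (my addition, not in the original article): the sample key from RFC 6455, dGhlIHNhbXBsZSBub25jZQ==, should produce the accept value s3pLLMBiTxaQ9kYGzzhZRbK+xOo=.

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

// sample handshake key taken from RFC 6455
var key = 'dGhlIHNhbXBsZSBub25jZQ==';
var accept = crypto.createHash('sha1').update(key + WS).digest('base64');

console.log(accept); // s3pLLMBiTxaQ9kYGzzhZRbK+xOo=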

那般握手部分就已经实现了,后边就是多少帧深入深入分析与转换的活了

先看下官方提供的帧结构暗暗提示图

[Figure: WebSocket frame structure diagram from the spec]

A quick rundown:

FIN marks whether this is the final fragment.

RSV are reserved bits, normally 0.

opcode marks the frame type: whether it is a continuation fragment, text or binary data, a heartbeat frame, and so on.

Here is a table of opcode values:

[Figure: opcode reference table]
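
Since the original screenshot is not available, here is a summary I am adding of the common opcode values defined in RFC 6455:

JavaScript

var OPCODES = {
    CONTINUATION: 0x0, // continuation frame of a fragmented message
    TEXT:         0x1, // UTF-8 text frame
    BINARY:       0x2, // binary frame
    CLOSE:        0x8, // connection close
    PING:         0x9, // heartbeat ping
    PONG:         0xA  // heartbeat pong
};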

MASK marks whether a masking key is used.

Payload len, together with the extended payload length that follows, gives the data length, and this is the most tedious part.

Payload len is only 7 bits, so as an unsigned integer it can only hold values from 0 to 127, which is obviously too small to describe a large payload. The spec therefore treats it as the actual length only when the value is 125 or less; if it is 126, the following two bytes hold the length, and if it is 127, the following eight bytes hold the length.

Masking-key is the masking key itself.

Below is the code for parsing a data frame:

JavaScript

function decodeDataFrame(e) {
    var i = 0,
        j, s,
        frame = {
            FIN: e[i] >> 7,                  // final-fragment flag
            Opcode: e[i++] & 15,             // frame type
            Mask: e[i] >> 7,                 // masking flag
            PayloadLength: e[i++] & 0x7F     // 7-bit length field
        };

    // 126: the next 2 bytes hold the real length
    if(frame.PayloadLength === 126) {
        frame.PayloadLength = (e[i++] << 8) + e[i++];
    }

    // 127: the next 8 bytes hold the real length (only the low 32 bits are read here)
    if(frame.PayloadLength === 127) {
        i += 4;
        frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
    }

    if(frame.Mask) {
        frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];

        // unmask the payload byte by byte
        for(j = 0, s = []; j < frame.PayloadLength; j++) {
            s.push(e[i + j] ^ frame.MaskingKey[j % 4]);
        }
    } else {
        s = e.slice(i, i + frame.PayloadLength);
    }

    s = new Buffer(s);

    // opcode 1 is a text frame, so convert it to a string
    if(frame.Opcode === 1) {
        s = s.toString();
    }

    frame.PayloadData = s;
    return frame;
}
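
To see decodeDataFrame in action, here is a usage sketch I am adding: a hand-built single-frame masked text message 'Hi' with an arbitrary masking key.

JavaScript

// 0x81: FIN = 1, opcode = 1 (text); 0x82: MASK = 1, payload length = 2
// masking key [0x01, 0x02, 0x03, 0x04]; 'H' ^ 0x01 = 0x49, 'i' ^ 0x02 = 0x6B
var frame = new Buffer([0x81, 0x82, 0x01, 0x02, 0x03, 0x04, 0x49, 0x6B]);

console.log(decodeDataFrame(frame));
// { FIN: 1, Opcode: 1, Mask: 1, PayloadLength: 2,
//   MaskingKey: [ 1, 2, 3, 4 ], PayloadData: 'Hi' }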

Next, the code for generating a data frame:

JavaScript

function encodeDataFrame(e) {
    var s = [],
        o = new Buffer(e.PayloadData),
        l = o.length;

    // first byte: FIN flag plus opcode
    s.push((e.FIN << 7) + e.Opcode);

    // then the payload length (frames sent by the server are not masked)
    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l&0xFF00) >> 8, l&0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
    }

    return Buffer.concat([new Buffer(s), o]);
}
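
And a quick usage sketch for encodeDataFrame, again my addition: frames sent from server to client are not masked, so a short text message is just a 2-byte header followed by the payload.

JavaScript

var buf = encodeDataFrame({ FIN: 1, Opcode: 1, PayloadData: 'Hi' });

console.log(buf); // <Buffer 81 02 48 69>
// 0x81 = FIN bit plus text opcode, 0x02 = payload length, then 'H' and 'i'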

Everything here follows the frame-structure diagram above; I won't go into more detail, since the focus of this article is the next part. If you're interested in this area, head over to web技术研究所.

 

Part 3: Transferring images over WebSocket, and a WebSocket voice chat room

Now for the main feature. What this article really wants to show is some application scenarios for WebSocket.

1. Transferring images

Let's first think through the steps of transferring an image: the server receives the client's request, reads the image file and forwards the binary data to the client. How does the client handle it? With the FileReader object, of course.

Client code first:

JavaScript

var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888"); ws.onopen = function(){ console.log("握手成功"); }; ws.onmessage = function(e) { var reader = new FileReader(); reader.onload = function(event) { var contents = event.target.result; var a = new Image(); a.src = contents; document.body.appendChild(a); } reader.readAsDataUOdysseyL(e.data); };

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");
 
ws.onopen = function(){
    console.log("握手成功");
};
 
ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};

On receiving the message we call readAsDataURL and append the base64 image straight to the page.
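
readAsDataURL inflates the data by roughly a third because of base64 encoding. An alternative I would suggest (not what the original demo does) is to hand the received Blob straight to URL.createObjectURL:

JavaScript

ws.onmessage = function(e) {
    // for binary frames, e.data is a Blob by default
    var img = new Image();
    img.onload = function() {
        URL.revokeObjectURL(img.src); // free the object URL once the image is loaded
    };
    img.src = URL.createObjectURL(e.data);
    document.body.appendChild(img);
};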

Over to the server code:

JavaScript

fs.readdir("skyland", function(err, files) { if(err) { throw err; } for(var i = 0; i < files.length; i ) { fs.readFile('skyland/' files[i], function(err, data) { if(err) { throw err; } o.write(encodeImgFrame(data)); }); } }); function encodeImgFrame(buf) { var s = [], l = buf.length, ret = []; s.push((1 << 7) 2); if(l < 126) { s.push(l); } else if(l < 0x10000) { s.push(126, (l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF); } return Buffer.concat([new Buffer(s), buf]); }

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
fs.readdir("skyland", function(err, files) {
if(err) {
throw err;
}
for(var i = 0; i < files.length; i ) {
fs.readFile('skyland/' files[i], function(err, data) {
if(err) {
throw err;
}
 
o.write(encodeImgFrame(data));
});
}
});
 
function encodeImgFrame(buf) {
var s = [],
l = buf.length,
ret = [];
 
s.push((1 << 7) 2);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), buf]);
}

Note the line s.push((1 << 7) + 2): the opcode is hard-coded to 2 here, i.e. a binary frame, so the client will not try to call toString on the received data; otherwise it would throw an error.

The code is simple. Now let me share how WebSocket performs when transferring images.

I tested with a fairly large batch of images, 8.24 MB in total.

An ordinary static-resource server takes about 20 s (the server is far away).

A CDN takes about 2.8 s.

And our WebSocket approach??!

The answer: also about 20 s. Disappointing, right? The time is spent on transmission, not on the server reading the images; on the local machine the same images finish in about 1 s. So a raw data stream cannot break through the limits that distance places on transfer speed either.

Let's move on to another use of WebSocket.

 

2. Building a voice chat room with WebSocket

First, let's lay out what the voice chat room has to do:

a user joins a channel and speaks into the microphone; the audio is sent to the backend, which forwards it to everyone else in the channel, and they play whatever they receive.

The difficulty looks like it lies in two places: first, capturing the audio input; second, playing back the received data stream.

Audio input first. Here we use HTML5's getUserMedia method, but beware: there is a deep pit when taking this method to production, which I'll get to at the end. Code first:

JavaScript

if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}

The first argument is {audio: true}, enabling audio only; it then creates an SRecorder object, and almost all subsequent operations happen on that object. At this point, if the code is running locally, the browser should ask whether to allow microphone input, and once you confirm, it's up and running.
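
One caveat I'll add here: navigator.getUserMedia with callbacks is the legacy API. In current browsers the promise-based navigator.mediaDevices.getUserMedia is preferred; a rough equivalent of the snippet above would look like this (assuming the same recorder variable and SRecorder constructor as in this article):

JavaScript

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ audio: true })
        .then(function (stream) {
            recorder = new SRecorder(stream);
        })
        .catch(function (err) {
            console.log('getUserMedia failed: ' + err.name);
        });
}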

Next let's look at the SRecorder constructor; here are the important parts:

JavaScript

var SRecorder = function(stream) {
    ……
   var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
    ……
}

AudioContext is an audio context object. Anyone who has done sound filtering should know it: "before a piece of audio reaches the speakers, we intercept it on the way, and that's how we get hold of the audio data; this interception is done by window.AudioContext, and all of our audio operations are based on this object." Through an AudioContext we can create different AudioNode nodes and, for example, add filters to play some peculiar sound.

Recording works on the same principle: we still go through AudioContext, but there is an extra step for receiving the microphone input. Instead of requesting an audio ArrayBuffer with ajax and decoding it, as we usually do when processing audio, receiving the microphone requires the createMediaStreamSource method; note that its argument is the argument of getUserMedia's second parameter, i.e. the stream in the success callback.

Then there is the createScriptProcessor method; its official description is:

Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.

——————

In short, this method lets us use JavaScript to process the captured audio.

We have finally reached audio capture! Victory is in sight!

Next, let's connect the microphone input to the audio capture:

JavaScript

audioInput.connect(recorder);
recorder.connect(context.destination);

The official description of context.destination is as follows:

The destination property of the AudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

——————

That is, context.destination returns the final destination of all the audio in the context.

Good. At this point we still need an event that listens for the captured audio:

JavaScript

recorder.onaudioprocess = function (e) {
    audioData.input(e.inputBuffer.getChannelData(0));
}

audioData is an object I found online; I only added a clear method to it because it's needed later. The key part is the excellent encodeWAV method, in which the original author did repeated audio compression and optimization. The full object will be included with the complete code at the end.

With that, the whole "user joins a channel and speaks into the microphone" part is done. The next step is sending the audio stream to the server, and this is where it gets slightly painful. As mentioned earlier, WebSocket uses the opcode to distinguish whether the data is text or binary, but what we feed into input inside onaudioprocess is an array, while playing the sound back requires a Blob with {type: 'audio/wav'}. So before sending we must convert the array into a WAV Blob, and that is where the encodeWAV method mentioned above comes in.

The server seems much simpler: it only needs to forward.

Local testing indeed worked. But then the giant pit arrived! When I ran the program on a server, calling getUserMedia told me it must be in a secure environment, that is, HTTPS is required, which means ws must also become wss... Because of that, the server code no longer uses our own handshake, parsing and encoding; it looks like this:

JavaScript

var https = require('https');
var fs = require('fs');
var ws = require('ws');
var userMap = Object.create(null);
var options = {
    key: fs.readFileSync('./privatekey.pem'),
    cert: fs.readFileSync('./certificate.pem')
};
var server = https.createServer(options, function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });
 
    fs.readFile('./testaudio.html', function(err, data) {
        if(err) {
            return ;
        }
 
        res.end(data);
    });
});
 
var wss = new ws.Server({server: server});
 
wss.on('connection', function(o) {
    o.on('message', function(message) {
        if(message.indexOf('user') === 0) {
            var user = message.split(':')[1];
            userMap[user] = o;
        } else {
            // forward the audio to everyone in the (simulated) channel
            for(var u in userMap) {
                userMap[u].send(message);
            }
        }
    });
});
 
server.listen(8888);

The code is still very simple: it uses the https module plus the ws module mentioned at the beginning. userMap is a simulated channel, implementing only the core forwarding logic.

I chose the ws module because pairing it with https to get wss is just too convenient, with zero conflicts with the logic code.

I won't go into setting up HTTPS here; essentially you need a private key, a CSR (certificate signing request) and the certificate file. Interested readers can look into it (though if you don't, you won't be able to use getUserMedia on a real site anyway...).

Below is the complete front-end code:

JavaScript

var a = document.getElementById('a');
var b = document.getElementById('b');
var c = document.getElementById('c');
 
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
 
var gRecorder = null;
var audio = document.querySelector('audio');
var door = false;
var ws = null;
 
b.onclick = function() {
    if(a.value === '') {
        alert('Please enter a user name');
        return false;
    }
    if(!navigator.getUserMedia) {
        alert('Sorry, your device does not support voice chat');
        return false;
    }
 
    SRecorder.get(function (rec) {
        gRecorder = rec;
    });
 
    ws = new WebSocket("wss://x.x.x.x:8888");
 
    ws.onopen = function() {
        console.log('Handshake succeeded');
        ws.send('user:' + a.value);
    };
 
    ws.onmessage = function(e) {
        receive(e.data);
    };
 
    document.onkeydown = function(e) {
        if(e.keyCode === 65) {
            if(!door) {
                gRecorder.start();
                door = true;
            }
        }
    };
 
    document.onkeyup = function(e) {
        if(e.keyCode === 65) {
            if(door) {
                ws.send(gRecorder.getBlob());
                gRecorder.clear();
                gRecorder.stop();
                door = false;
            }
        }
    }
}
 
c.onclick = function() {
    if(ws) {
        ws.close();
    }
}
 
var SRecorder = function(stream) {
    var config = {};
 
    config.sampleBits = config.sampleBits || 8;
    config.sampleRate = config.sampleRate || (44100 / 6);
 
    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
 
    var audioData = {
        size: 0          // length of the recording
        , buffer: []     // recording buffer
        , inputSampleRate: context.sampleRate    // input sample rate
        , inputSampleBits: 16       // input sample bits: 8 or 16
        , outputSampleRate: config.sampleRate    // output sample rate
        , outputSampleBits: config.sampleBits       // output sample bits: 8 or 16
        , clear: function() {
            this.buffer = [];
            this.size = 0;
        }
        , input: function (data) {
            this.buffer.push(new Float32Array(data));
            this.size += data.length;
        }
        , compress: function () { // merge and downsample
            // merge
            var data = new Float32Array(this.size);
            var offset = 0;
            for (var i = 0; i < this.buffer.length; i++) {
                data.set(this.buffer[i], offset);
                offset += this.buffer[i].length;
            }
            // downsample
            var compression = parseInt(this.inputSampleRate / this.outputSampleRate);
            var length = data.length / compression;
            var result = new Float32Array(length);
            var index = 0, j = 0;
            while (index < length) {
                result[index] = data[j];
                j += compression;
                index++;
            }
            return result;
        }
        , encodeWAV: function () {
            var sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
            var sampleBits = Math.min(this.inputSampleBits, this.outputSampleBits);
            var bytes = this.compress();
            var dataLength = bytes.length * (sampleBits / 8);
            var buffer = new ArrayBuffer(44 + dataLength);
            var data = new DataView(buffer);
 
            var channelCount = 1; // mono
            var offset = 0;
 
            var writeString = function (str) {
                for (var i = 0; i < str.length; i++) {
                    data.setUint8(offset + i, str.charCodeAt(i));
                }
            };
 
            // RIFF chunk descriptor
            writeString('RIFF'); offset += 4;
            // byte count from the next address to the end of file, i.e. file size - 8
            data.setUint32(offset, 36 + dataLength, true); offset += 4;
            // WAVE identifier
            writeString('WAVE'); offset += 4;
            // format chunk identifier
            writeString('fmt '); offset += 4;
            // format chunk size, usually 0x10 = 16
            data.setUint32(offset, 16, true); offset += 4;
            // audio format (1 = PCM)
            data.setUint16(offset, 1, true); offset += 2;
            // number of channels
            data.setUint16(offset, channelCount, true); offset += 2;
            // sample rate: samples per second per channel
            data.setUint32(offset, sampleRate, true); offset += 4;
            // byte rate: channels x sample rate x bits per sample / 8
            data.setUint32(offset, channelCount * sampleRate * (sampleBits / 8), true); offset += 4;
            // block align: bytes per sample frame, channels x bits per sample / 8
            data.setUint16(offset, channelCount * (sampleBits / 8), true); offset += 2;
            // bits per sample
            data.setUint16(offset, sampleBits, true); offset += 2;
            // data chunk identifier
            writeString('data'); offset += 4;
            // size of the sample data, i.e. total size - 44
            data.setUint32(offset, dataLength, true); offset += 4;
            // write the sample data
            if (sampleBits === 8) {
                for (var i = 0; i < bytes.length; i++, offset++) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    var val = s < 0 ? s * 0x8000 : s * 0x7FFF;
                    val = parseInt(255 / (65535 / (val + 32768)));
                    data.setInt8(offset, val, true);
                }
            } else {
                for (var i = 0; i < bytes.length; i++, offset += 2) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
                }
            }
 
            return new Blob([data], { type: 'audio/wav' });
        }
    };
 
    this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }
 
    this.stop = function () {
        recorder.disconnect();
    }
 
    this.getBlob = function () {
        return audioData.encodeWAV();
    }
 
    this.clear = function() {
        audioData.clear();
    }
 
    recorder.onaudioprocess = function (e) {
        audioData.input(e.inputBuffer.getChannelData(0));
    }
};
 
SRecorder.get = function (callback) {
    if (callback) {
        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    var rec = new SRecorder(stream);
                    callback(rec);
                })
        }
    }
}
 
function receive(e) {
    audio.src = window.URL.createObjectURL(e);
}

Note: hold down the A key to talk, release it to send.

I did try real-time talking without a push-to-talk key, sending via setInterval, but the background noise was rather heavy and the result wasn't good; that would require another layer of wrapping over encodeWAV to do more noise removal, so I went with the more convenient push-to-talk approach.

 

This article first surveyed WebSocket's common usages, then tried parsing and generating data frames by hand according to the spec, which gave us a deeper understanding of WebSocket.

Finally, the two demos showed WebSocket's potential. The voice chat room demo covers quite a bit of ground; if you haven't worked with the AudioContext object before, it's best to read up on AudioContext first.

That's it for this article. If you have any thoughts or questions, feel free to raise them so we can discuss and explore together.

 
